Test Report: KVM_Linux_crio 19046

fb148a11d8032b35b0d9cd6893af3c5921ed4428:2024-06-10:34835

Failed tests (31/317)

Order  Failed test  Duration (s)
30 TestAddons/parallel/Ingress 153.31
32 TestAddons/parallel/MetricsServer 358.94
45 TestAddons/StoppedEnableDisable 154.27
164 TestMultiControlPlane/serial/StopSecondaryNode 141.81
166 TestMultiControlPlane/serial/RestartSecondaryNode 61.28
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 365.1
171 TestMultiControlPlane/serial/StopCluster 141.75
172 TestMultiControlPlane/serial/RestartCluster 651.55
174 TestMultiControlPlane/serial/AddSecondaryNode 140.74
230 TestMultiNode/serial/RestartKeepsNodes 304.61
232 TestMultiNode/serial/StopMultiNode 141.39
239 TestPreload 272.41
247 TestKubernetesUpgrade 441.88
289 TestStartStop/group/old-k8s-version/serial/FirstStart 265.47
297 TestStartStop/group/embed-certs/serial/Stop 139.11
300 TestStartStop/group/no-preload/serial/Stop 139.16
301 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
303 TestStartStop/group/old-k8s-version/serial/DeployApp 0.49
304 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 102.82
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
311 TestStartStop/group/old-k8s-version/serial/SecondStart 697.3
314 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.01
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
317 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 542.55
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.54
319 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 541.8
320 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 542.93
321 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 391.85
322 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 395.3
323 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 187.51
349 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 167.33
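To re-run one of these failures locally, a minimal sketch (assuming a checkout of the minikube repository with the integration tests under test/integration and a prebuilt out/minikube-linux-amd64; the exact flags this CI job passes are not shown in this report) is to select the failing test by name:

	# Example: re-run only the Ingress addon test with a generous timeout.
	go test -v -timeout 60m ./test/integration -run "TestAddons/parallel/Ingress"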
TestAddons/parallel/Ingress (153.31s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-021732 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-021732 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-021732 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8105de8a-be57-47d3-ade8-89321c7029b7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [8105de8a-be57-47d3-ade8-89321c7029b7] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003565301s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-021732 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-021732 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.766620251s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-021732 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-021732 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.244
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-021732 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-021732 addons disable ingress-dns --alsologtostderr -v=1: (1.599171716s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-021732 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-021732 addons disable ingress --alsologtostderr -v=1: (7.953429363s)
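The failing step above is the in-VM request: "Process exited with status 28" is the remote curl exiting with curl's timeout code (28). A minimal manual re-check, assuming the addons-021732 profile is still running, uses the same selector and Host header as the test:

	# Confirm the ingress controller and the Ingress object, then repeat the request with an explicit client-side timeout.
	kubectl --context addons-021732 -n ingress-nginx get pods -l app.kubernetes.io/component=controller
	kubectl --context addons-021732 get ingress
	out/minikube-linux-amd64 -p addons-021732 ssh "curl -sv -m 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"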
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-021732 -n addons-021732
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-021732 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-021732 logs -n 25: (1.270258555s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-938190 | jenkins | v1.33.1 | 10 Jun 24 10:21 UTC |                     |
	|         | -p download-only-938190                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 10 Jun 24 10:21 UTC | 10 Jun 24 10:21 UTC |
	| delete  | -p download-only-938190                                                                     | download-only-938190 | jenkins | v1.33.1 | 10 Jun 24 10:21 UTC | 10 Jun 24 10:21 UTC |
	| delete  | -p download-only-996636                                                                     | download-only-996636 | jenkins | v1.33.1 | 10 Jun 24 10:21 UTC | 10 Jun 24 10:21 UTC |
	| delete  | -p download-only-938190                                                                     | download-only-938190 | jenkins | v1.33.1 | 10 Jun 24 10:21 UTC | 10 Jun 24 10:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-775609 | jenkins | v1.33.1 | 10 Jun 24 10:21 UTC |                     |
	|         | binary-mirror-775609                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:34103                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-775609                                                                     | binary-mirror-775609 | jenkins | v1.33.1 | 10 Jun 24 10:21 UTC | 10 Jun 24 10:21 UTC |
	| addons  | disable dashboard -p                                                                        | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:21 UTC |                     |
	|         | addons-021732                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:21 UTC |                     |
	|         | addons-021732                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-021732 --wait=true                                                                | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:21 UTC | 10 Jun 24 10:24 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:24 UTC | 10 Jun 24 10:24 UTC |
	|         | -p addons-021732                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:24 UTC | 10 Jun 24 10:24 UTC |
	|         | addons-021732                                                                               |                      |         |         |                     |                     |
	| addons  | addons-021732 addons disable                                                                | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:24 UTC | 10 Jun 24 10:24 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-021732 ip                                                                            | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:24 UTC | 10 Jun 24 10:24 UTC |
	| addons  | addons-021732 addons disable                                                                | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:24 UTC | 10 Jun 24 10:24 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:24 UTC | 10 Jun 24 10:24 UTC |
	|         | -p addons-021732                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-021732 ssh curl -s                                                                   | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:24 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-021732 ssh cat                                                                       | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:24 UTC | 10 Jun 24 10:24 UTC |
	|         | /opt/local-path-provisioner/pvc-be3afae5-1392-4466-a1db-28b1c658ba01_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-021732 addons disable                                                                | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:24 UTC | 10 Jun 24 10:24 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:24 UTC | 10 Jun 24 10:24 UTC |
	|         | addons-021732                                                                               |                      |         |         |                     |                     |
	| addons  | addons-021732 addons                                                                        | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:25 UTC | 10 Jun 24 10:25 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-021732 addons                                                                        | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:25 UTC | 10 Jun 24 10:25 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-021732 ip                                                                            | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:26 UTC | 10 Jun 24 10:26 UTC |
	| addons  | addons-021732 addons disable                                                                | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:26 UTC | 10 Jun 24 10:26 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-021732 addons disable                                                                | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:26 UTC | 10 Jun 24 10:27 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 10:21:49
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 10:21:49.316066   11511 out.go:291] Setting OutFile to fd 1 ...
	I0610 10:21:49.316303   11511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:21:49.316312   11511 out.go:304] Setting ErrFile to fd 2...
	I0610 10:21:49.316316   11511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:21:49.316522   11511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 10:21:49.317167   11511 out.go:298] Setting JSON to false
	I0610 10:21:49.317958   11511 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":250,"bootTime":1718014659,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 10:21:49.318018   11511 start.go:139] virtualization: kvm guest
	I0610 10:21:49.320049   11511 out.go:177] * [addons-021732] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 10:21:49.321469   11511 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 10:21:49.322696   11511 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:21:49.321485   11511 notify.go:220] Checking for updates...
	I0610 10:21:49.325037   11511 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 10:21:49.326175   11511 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:21:49.327541   11511 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 10:21:49.328744   11511 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:21:49.330331   11511 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 10:21:49.361160   11511 out.go:177] * Using the kvm2 driver based on user configuration
	I0610 10:21:49.362267   11511 start.go:297] selected driver: kvm2
	I0610 10:21:49.362283   11511 start.go:901] validating driver "kvm2" against <nil>
	I0610 10:21:49.362297   11511 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:21:49.363073   11511 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:21:49.363176   11511 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19046-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0610 10:21:49.377551   11511 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0610 10:21:49.377596   11511 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 10:21:49.377798   11511 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:21:49.377848   11511 cni.go:84] Creating CNI manager for ""
	I0610 10:21:49.377860   11511 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 10:21:49.377867   11511 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 10:21:49.377911   11511 start.go:340] cluster config:
	{Name:addons-021732 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-021732 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:21:49.378011   11511 iso.go:125] acquiring lock: {Name:mk85871dbcb370cef71a4aa2ff7ef43503908c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:21:49.383533   11511 out.go:177] * Starting "addons-021732" primary control-plane node in "addons-021732" cluster
	I0610 10:21:49.384833   11511 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 10:21:49.384867   11511 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0610 10:21:49.384874   11511 cache.go:56] Caching tarball of preloaded images
	I0610 10:21:49.384978   11511 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 10:21:49.384991   11511 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0610 10:21:49.385307   11511 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/config.json ...
	I0610 10:21:49.385329   11511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/config.json: {Name:mke7f6b1ae5b13865ef37639a6a871ad9f6270b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:21:49.385472   11511 start.go:360] acquireMachinesLock for addons-021732: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:21:49.385529   11511 start.go:364] duration metric: took 40.705µs to acquireMachinesLock for "addons-021732"
	I0610 10:21:49.385553   11511 start.go:93] Provisioning new machine with config: &{Name:addons-021732 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-021732 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
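	The same cluster config is also persisted under the profile directory reported earlier in this log, so it can be reviewed after the run without re-parsing this output (a minimal sketch using the config.json path from the log above):
	# Dump the saved profile config for addons-021732.
	cat /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/config.json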
	I0610 10:21:49.385611   11511 start.go:125] createHost starting for "" (driver="kvm2")
	I0610 10:21:49.387155   11511 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0610 10:21:49.387272   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:21:49.387305   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:21:49.402043   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38969
	I0610 10:21:49.402432   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:21:49.403003   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:21:49.403027   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:21:49.403329   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:21:49.403523   11511 main.go:141] libmachine: (addons-021732) Calling .GetMachineName
	I0610 10:21:49.403640   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:21:49.403854   11511 start.go:159] libmachine.API.Create for "addons-021732" (driver="kvm2")
	I0610 10:21:49.403875   11511 client.go:168] LocalClient.Create starting
	I0610 10:21:49.403908   11511 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem
	I0610 10:21:49.581205   11511 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem
	I0610 10:21:49.632176   11511 main.go:141] libmachine: Running pre-create checks...
	I0610 10:21:49.632201   11511 main.go:141] libmachine: (addons-021732) Calling .PreCreateCheck
	I0610 10:21:49.632780   11511 main.go:141] libmachine: (addons-021732) Calling .GetConfigRaw
	I0610 10:21:49.633269   11511 main.go:141] libmachine: Creating machine...
	I0610 10:21:49.633283   11511 main.go:141] libmachine: (addons-021732) Calling .Create
	I0610 10:21:49.633451   11511 main.go:141] libmachine: (addons-021732) Creating KVM machine...
	I0610 10:21:49.634744   11511 main.go:141] libmachine: (addons-021732) DBG | found existing default KVM network
	I0610 10:21:49.635659   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:49.635496   11534 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000014730}
	I0610 10:21:49.635735   11511 main.go:141] libmachine: (addons-021732) DBG | created network xml: 
	I0610 10:21:49.635759   11511 main.go:141] libmachine: (addons-021732) DBG | <network>
	I0610 10:21:49.635771   11511 main.go:141] libmachine: (addons-021732) DBG |   <name>mk-addons-021732</name>
	I0610 10:21:49.635790   11511 main.go:141] libmachine: (addons-021732) DBG |   <dns enable='no'/>
	I0610 10:21:49.635802   11511 main.go:141] libmachine: (addons-021732) DBG |   
	I0610 10:21:49.635817   11511 main.go:141] libmachine: (addons-021732) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0610 10:21:49.635832   11511 main.go:141] libmachine: (addons-021732) DBG |     <dhcp>
	I0610 10:21:49.635849   11511 main.go:141] libmachine: (addons-021732) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0610 10:21:49.635862   11511 main.go:141] libmachine: (addons-021732) DBG |     </dhcp>
	I0610 10:21:49.635874   11511 main.go:141] libmachine: (addons-021732) DBG |   </ip>
	I0610 10:21:49.635886   11511 main.go:141] libmachine: (addons-021732) DBG |   
	I0610 10:21:49.635896   11511 main.go:141] libmachine: (addons-021732) DBG | </network>
	I0610 10:21:49.635907   11511 main.go:141] libmachine: (addons-021732) DBG | 
	I0610 10:21:49.641171   11511 main.go:141] libmachine: (addons-021732) DBG | trying to create private KVM network mk-addons-021732 192.168.39.0/24...
	I0610 10:21:49.706056   11511 main.go:141] libmachine: (addons-021732) DBG | private KVM network mk-addons-021732 192.168.39.0/24 created
	I0610 10:21:49.706112   11511 main.go:141] libmachine: (addons-021732) Setting up store path in /home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732 ...
	I0610 10:21:49.706129   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:49.706030   11534 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:21:49.706150   11511 main.go:141] libmachine: (addons-021732) Building disk image from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0610 10:21:49.706246   11511 main.go:141] libmachine: (addons-021732) Downloading /home/jenkins/minikube-integration/19046-3880/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 10:21:49.956545   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:49.956439   11534 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa...
	I0610 10:21:50.098561   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:50.098413   11534 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/addons-021732.rawdisk...
	I0610 10:21:50.098591   11511 main.go:141] libmachine: (addons-021732) DBG | Writing magic tar header
	I0610 10:21:50.098601   11511 main.go:141] libmachine: (addons-021732) DBG | Writing SSH key tar header
	I0610 10:21:50.098608   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:50.098524   11534 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732 ...
	I0610 10:21:50.098619   11511 main.go:141] libmachine: (addons-021732) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732
	I0610 10:21:50.098651   11511 main.go:141] libmachine: (addons-021732) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines
	I0610 10:21:50.098665   11511 main.go:141] libmachine: (addons-021732) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732 (perms=drwx------)
	I0610 10:21:50.098675   11511 main.go:141] libmachine: (addons-021732) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:21:50.098685   11511 main.go:141] libmachine: (addons-021732) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880
	I0610 10:21:50.098691   11511 main.go:141] libmachine: (addons-021732) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0610 10:21:50.098698   11511 main.go:141] libmachine: (addons-021732) DBG | Checking permissions on dir: /home/jenkins
	I0610 10:21:50.098703   11511 main.go:141] libmachine: (addons-021732) DBG | Checking permissions on dir: /home
	I0610 10:21:50.098709   11511 main.go:141] libmachine: (addons-021732) DBG | Skipping /home - not owner
	I0610 10:21:50.098721   11511 main.go:141] libmachine: (addons-021732) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines (perms=drwxr-xr-x)
	I0610 10:21:50.098734   11511 main.go:141] libmachine: (addons-021732) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube (perms=drwxr-xr-x)
	I0610 10:21:50.098773   11511 main.go:141] libmachine: (addons-021732) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880 (perms=drwxrwxr-x)
	I0610 10:21:50.098802   11511 main.go:141] libmachine: (addons-021732) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0610 10:21:50.098812   11511 main.go:141] libmachine: (addons-021732) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0610 10:21:50.098817   11511 main.go:141] libmachine: (addons-021732) Creating domain...
	I0610 10:21:50.099756   11511 main.go:141] libmachine: (addons-021732) define libvirt domain using xml: 
	I0610 10:21:50.099781   11511 main.go:141] libmachine: (addons-021732) <domain type='kvm'>
	I0610 10:21:50.099792   11511 main.go:141] libmachine: (addons-021732)   <name>addons-021732</name>
	I0610 10:21:50.099800   11511 main.go:141] libmachine: (addons-021732)   <memory unit='MiB'>4000</memory>
	I0610 10:21:50.099809   11511 main.go:141] libmachine: (addons-021732)   <vcpu>2</vcpu>
	I0610 10:21:50.099816   11511 main.go:141] libmachine: (addons-021732)   <features>
	I0610 10:21:50.099826   11511 main.go:141] libmachine: (addons-021732)     <acpi/>
	I0610 10:21:50.099836   11511 main.go:141] libmachine: (addons-021732)     <apic/>
	I0610 10:21:50.099848   11511 main.go:141] libmachine: (addons-021732)     <pae/>
	I0610 10:21:50.099863   11511 main.go:141] libmachine: (addons-021732)     
	I0610 10:21:50.099874   11511 main.go:141] libmachine: (addons-021732)   </features>
	I0610 10:21:50.099882   11511 main.go:141] libmachine: (addons-021732)   <cpu mode='host-passthrough'>
	I0610 10:21:50.099891   11511 main.go:141] libmachine: (addons-021732)   
	I0610 10:21:50.099931   11511 main.go:141] libmachine: (addons-021732)   </cpu>
	I0610 10:21:50.099942   11511 main.go:141] libmachine: (addons-021732)   <os>
	I0610 10:21:50.099955   11511 main.go:141] libmachine: (addons-021732)     <type>hvm</type>
	I0610 10:21:50.099966   11511 main.go:141] libmachine: (addons-021732)     <boot dev='cdrom'/>
	I0610 10:21:50.100011   11511 main.go:141] libmachine: (addons-021732)     <boot dev='hd'/>
	I0610 10:21:50.100042   11511 main.go:141] libmachine: (addons-021732)     <bootmenu enable='no'/>
	I0610 10:21:50.100050   11511 main.go:141] libmachine: (addons-021732)   </os>
	I0610 10:21:50.100057   11511 main.go:141] libmachine: (addons-021732)   <devices>
	I0610 10:21:50.100063   11511 main.go:141] libmachine: (addons-021732)     <disk type='file' device='cdrom'>
	I0610 10:21:50.100075   11511 main.go:141] libmachine: (addons-021732)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/boot2docker.iso'/>
	I0610 10:21:50.100082   11511 main.go:141] libmachine: (addons-021732)       <target dev='hdc' bus='scsi'/>
	I0610 10:21:50.100089   11511 main.go:141] libmachine: (addons-021732)       <readonly/>
	I0610 10:21:50.100094   11511 main.go:141] libmachine: (addons-021732)     </disk>
	I0610 10:21:50.100101   11511 main.go:141] libmachine: (addons-021732)     <disk type='file' device='disk'>
	I0610 10:21:50.100118   11511 main.go:141] libmachine: (addons-021732)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0610 10:21:50.100139   11511 main.go:141] libmachine: (addons-021732)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/addons-021732.rawdisk'/>
	I0610 10:21:50.100150   11511 main.go:141] libmachine: (addons-021732)       <target dev='hda' bus='virtio'/>
	I0610 10:21:50.100161   11511 main.go:141] libmachine: (addons-021732)     </disk>
	I0610 10:21:50.100169   11511 main.go:141] libmachine: (addons-021732)     <interface type='network'>
	I0610 10:21:50.100177   11511 main.go:141] libmachine: (addons-021732)       <source network='mk-addons-021732'/>
	I0610 10:21:50.100183   11511 main.go:141] libmachine: (addons-021732)       <model type='virtio'/>
	I0610 10:21:50.100189   11511 main.go:141] libmachine: (addons-021732)     </interface>
	I0610 10:21:50.100195   11511 main.go:141] libmachine: (addons-021732)     <interface type='network'>
	I0610 10:21:50.100202   11511 main.go:141] libmachine: (addons-021732)       <source network='default'/>
	I0610 10:21:50.100207   11511 main.go:141] libmachine: (addons-021732)       <model type='virtio'/>
	I0610 10:21:50.100212   11511 main.go:141] libmachine: (addons-021732)     </interface>
	I0610 10:21:50.100217   11511 main.go:141] libmachine: (addons-021732)     <serial type='pty'>
	I0610 10:21:50.100227   11511 main.go:141] libmachine: (addons-021732)       <target port='0'/>
	I0610 10:21:50.100236   11511 main.go:141] libmachine: (addons-021732)     </serial>
	I0610 10:21:50.100246   11511 main.go:141] libmachine: (addons-021732)     <console type='pty'>
	I0610 10:21:50.100252   11511 main.go:141] libmachine: (addons-021732)       <target type='serial' port='0'/>
	I0610 10:21:50.100262   11511 main.go:141] libmachine: (addons-021732)     </console>
	I0610 10:21:50.100287   11511 main.go:141] libmachine: (addons-021732)     <rng model='virtio'>
	I0610 10:21:50.100309   11511 main.go:141] libmachine: (addons-021732)       <backend model='random'>/dev/random</backend>
	I0610 10:21:50.100318   11511 main.go:141] libmachine: (addons-021732)     </rng>
	I0610 10:21:50.100325   11511 main.go:141] libmachine: (addons-021732)     
	I0610 10:21:50.100331   11511 main.go:141] libmachine: (addons-021732)     
	I0610 10:21:50.100341   11511 main.go:141] libmachine: (addons-021732)   </devices>
	I0610 10:21:50.100351   11511 main.go:141] libmachine: (addons-021732) </domain>
	I0610 10:21:50.100360   11511 main.go:141] libmachine: (addons-021732) 
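	The domain XML logged above is handed to libvirt as-is; on the CI host it can be compared against what libvirt actually stored (a minimal sketch using stock virsh commands and the qemu:///system URI from the log):
	# Inspect the private network and the domain definition created by the kvm2 driver.
	virsh -c qemu:///system net-dumpxml mk-addons-021732
	virsh -c qemu:///system dumpxml addons-021732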
	I0610 10:21:50.106211   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:e2:5e:51 in network default
	I0610 10:21:50.107734   11511 main.go:141] libmachine: (addons-021732) Ensuring networks are active...
	I0610 10:21:50.107758   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:21:50.108454   11511 main.go:141] libmachine: (addons-021732) Ensuring network default is active
	I0610 10:21:50.108728   11511 main.go:141] libmachine: (addons-021732) Ensuring network mk-addons-021732 is active
	I0610 10:21:50.109215   11511 main.go:141] libmachine: (addons-021732) Getting domain xml...
	I0610 10:21:50.109907   11511 main.go:141] libmachine: (addons-021732) Creating domain...
	I0610 10:21:51.349978   11511 main.go:141] libmachine: (addons-021732) Waiting to get IP...
	I0610 10:21:51.350666   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:21:51.351052   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find current IP address of domain addons-021732 in network mk-addons-021732
	I0610 10:21:51.351077   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:51.351028   11534 retry.go:31] will retry after 227.859894ms: waiting for machine to come up
	I0610 10:21:51.580389   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:21:51.580808   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find current IP address of domain addons-021732 in network mk-addons-021732
	I0610 10:21:51.580842   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:51.580770   11534 retry.go:31] will retry after 377.61731ms: waiting for machine to come up
	I0610 10:21:51.960306   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:21:51.960650   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find current IP address of domain addons-021732 in network mk-addons-021732
	I0610 10:21:51.960684   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:51.960632   11534 retry.go:31] will retry after 425.397308ms: waiting for machine to come up
	I0610 10:21:52.387234   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:21:52.387657   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find current IP address of domain addons-021732 in network mk-addons-021732
	I0610 10:21:52.387686   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:52.387631   11534 retry.go:31] will retry after 383.080459ms: waiting for machine to come up
	I0610 10:21:52.772105   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:21:52.772489   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find current IP address of domain addons-021732 in network mk-addons-021732
	I0610 10:21:52.772514   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:52.772452   11534 retry.go:31] will retry after 606.763353ms: waiting for machine to come up
	I0610 10:21:53.381987   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:21:53.382481   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find current IP address of domain addons-021732 in network mk-addons-021732
	I0610 10:21:53.382514   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:53.382428   11534 retry.go:31] will retry after 758.641117ms: waiting for machine to come up
	I0610 10:21:54.143101   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:21:54.143489   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find current IP address of domain addons-021732 in network mk-addons-021732
	I0610 10:21:54.143510   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:54.143460   11534 retry.go:31] will retry after 1.125193015s: waiting for machine to come up
	I0610 10:21:55.270444   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:21:55.270880   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find current IP address of domain addons-021732 in network mk-addons-021732
	I0610 10:21:55.270914   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:55.270850   11534 retry.go:31] will retry after 1.115970155s: waiting for machine to come up
	I0610 10:21:56.388121   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:21:56.388519   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find current IP address of domain addons-021732 in network mk-addons-021732
	I0610 10:21:56.388545   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:56.388486   11534 retry.go:31] will retry after 1.346495635s: waiting for machine to come up
	I0610 10:21:57.736834   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:21:57.737297   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find current IP address of domain addons-021732 in network mk-addons-021732
	I0610 10:21:57.737325   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:57.737234   11534 retry.go:31] will retry after 1.420732083s: waiting for machine to come up
	I0610 10:21:59.159782   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:21:59.160224   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find current IP address of domain addons-021732 in network mk-addons-021732
	I0610 10:21:59.160253   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:59.160159   11534 retry.go:31] will retry after 2.590877904s: waiting for machine to come up
	I0610 10:22:01.754009   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:01.754437   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find current IP address of domain addons-021732 in network mk-addons-021732
	I0610 10:22:01.754463   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:22:01.754388   11534 retry.go:31] will retry after 3.42062392s: waiting for machine to come up
	I0610 10:22:05.176466   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:05.176856   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find current IP address of domain addons-021732 in network mk-addons-021732
	I0610 10:22:05.176881   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:22:05.176803   11534 retry.go:31] will retry after 4.163744632s: waiting for machine to come up
	I0610 10:22:09.345304   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:09.345784   11511 main.go:141] libmachine: (addons-021732) Found IP for machine: 192.168.39.244
	I0610 10:22:09.345820   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has current primary IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:09.345831   11511 main.go:141] libmachine: (addons-021732) Reserving static IP address...
	I0610 10:22:09.346208   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find host DHCP lease matching {name: "addons-021732", mac: "52:54:00:70:72:ae", ip: "192.168.39.244"} in network mk-addons-021732
	I0610 10:22:09.417862   11511 main.go:141] libmachine: (addons-021732) DBG | Getting to WaitForSSH function...
	I0610 10:22:09.417939   11511 main.go:141] libmachine: (addons-021732) Reserved static IP address: 192.168.39.244
	I0610 10:22:09.417959   11511 main.go:141] libmachine: (addons-021732) Waiting for SSH to be available...
	I0610 10:22:09.420832   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:09.421379   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:minikube Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:09.421410   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:09.421757   11511 main.go:141] libmachine: (addons-021732) DBG | Using SSH client type: external
	I0610 10:22:09.421782   11511 main.go:141] libmachine: (addons-021732) DBG | Using SSH private key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa (-rw-------)
	I0610 10:22:09.421818   11511 main.go:141] libmachine: (addons-021732) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.244 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0610 10:22:09.421833   11511 main.go:141] libmachine: (addons-021732) DBG | About to run SSH command:
	I0610 10:22:09.421846   11511 main.go:141] libmachine: (addons-021732) DBG | exit 0
	I0610 10:22:09.553527   11511 main.go:141] libmachine: (addons-021732) DBG | SSH cmd err, output: <nil>: 
	I0610 10:22:09.553801   11511 main.go:141] libmachine: (addons-021732) KVM machine creation complete!
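	The IP the driver was polling for can also be confirmed outside minikube once the machine is up (a minimal sketch; both queries read libvirt's DHCP lease data for the mk-addons-021732 network):
	# 192.168.39.244 from the log should appear as an active lease / interface address.
	virsh -c qemu:///system net-dhcp-leases mk-addons-021732
	virsh -c qemu:///system domifaddr addons-021732 --source lease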
	I0610 10:22:09.554174   11511 main.go:141] libmachine: (addons-021732) Calling .GetConfigRaw
	I0610 10:22:09.554749   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:09.554953   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:09.555226   11511 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0610 10:22:09.555246   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:09.556806   11511 main.go:141] libmachine: Detecting operating system of created instance...
	I0610 10:22:09.556824   11511 main.go:141] libmachine: Waiting for SSH to be available...
	I0610 10:22:09.556840   11511 main.go:141] libmachine: Getting to WaitForSSH function...
	I0610 10:22:09.556849   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:09.559569   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:09.559929   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:09.559955   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:09.560096   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:09.560302   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:09.560469   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:09.560613   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:09.560777   11511 main.go:141] libmachine: Using SSH client type: native
	I0610 10:22:09.561022   11511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0610 10:22:09.561038   11511 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0610 10:22:09.660271   11511 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 10:22:09.660292   11511 main.go:141] libmachine: Detecting the provisioner...
	I0610 10:22:09.660299   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:09.663173   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:09.663594   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:09.663630   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:09.663845   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:09.664042   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:09.664220   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:09.664345   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:09.664515   11511 main.go:141] libmachine: Using SSH client type: native
	I0610 10:22:09.664717   11511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0610 10:22:09.664733   11511 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0610 10:22:09.765402   11511 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0610 10:22:09.765480   11511 main.go:141] libmachine: found compatible host: buildroot
	I0610 10:22:09.765494   11511 main.go:141] libmachine: Provisioning with buildroot...
	I0610 10:22:09.765508   11511 main.go:141] libmachine: (addons-021732) Calling .GetMachineName
	I0610 10:22:09.765725   11511 buildroot.go:166] provisioning hostname "addons-021732"
	I0610 10:22:09.765749   11511 main.go:141] libmachine: (addons-021732) Calling .GetMachineName
	I0610 10:22:09.765929   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:09.768370   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:09.768711   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:09.768738   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:09.768867   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:09.769046   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:09.769209   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:09.769337   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:09.769495   11511 main.go:141] libmachine: Using SSH client type: native
	I0610 10:22:09.769702   11511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0610 10:22:09.769722   11511 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-021732 && echo "addons-021732" | sudo tee /etc/hostname
	I0610 10:22:09.889179   11511 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-021732
	
	I0610 10:22:09.889217   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:09.892330   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:09.892660   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:09.892700   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:09.892888   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:09.893099   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:09.893299   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:09.893456   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:09.893635   11511 main.go:141] libmachine: Using SSH client type: native
	I0610 10:22:09.893793   11511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0610 10:22:09.893808   11511 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-021732' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-021732/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-021732' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 10:22:10.005934   11511 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 10:22:10.005964   11511 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19046-3880/.minikube CaCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19046-3880/.minikube}
	I0610 10:22:10.006014   11511 buildroot.go:174] setting up certificates
	I0610 10:22:10.006034   11511 provision.go:84] configureAuth start
	I0610 10:22:10.006052   11511 main.go:141] libmachine: (addons-021732) Calling .GetMachineName
	I0610 10:22:10.006391   11511 main.go:141] libmachine: (addons-021732) Calling .GetIP
	I0610 10:22:10.009526   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.009931   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:10.009953   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.010070   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:10.012193   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.012556   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:10.012582   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.012747   11511 provision.go:143] copyHostCerts
	I0610 10:22:10.012847   11511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem (1082 bytes)
	I0610 10:22:10.013010   11511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem (1123 bytes)
	I0610 10:22:10.013093   11511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem (1675 bytes)
	I0610 10:22:10.013160   11511 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem org=jenkins.addons-021732 san=[127.0.0.1 192.168.39.244 addons-021732 localhost minikube]
	I0610 10:22:10.130372   11511 provision.go:177] copyRemoteCerts
	I0610 10:22:10.130433   11511 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 10:22:10.130455   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:10.133258   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.133608   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:10.133630   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.133786   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:10.133957   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:10.134132   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:10.134273   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:10.214993   11511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 10:22:10.237877   11511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0610 10:22:10.260926   11511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 10:22:10.284159   11511 provision.go:87] duration metric: took 278.109655ms to configureAuth
	I0610 10:22:10.284186   11511 buildroot.go:189] setting minikube options for container-runtime
	I0610 10:22:10.284343   11511 config.go:182] Loaded profile config "addons-021732": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:22:10.284406   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:10.287363   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.287723   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:10.287751   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.287899   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:10.288121   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:10.288322   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:10.288471   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:10.288643   11511 main.go:141] libmachine: Using SSH client type: native
	I0610 10:22:10.288814   11511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0610 10:22:10.288831   11511 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 10:22:10.834878   11511 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 10:22:10.834911   11511 main.go:141] libmachine: Checking connection to Docker...
	I0610 10:22:10.834923   11511 main.go:141] libmachine: (addons-021732) Calling .GetURL
	I0610 10:22:10.836450   11511 main.go:141] libmachine: (addons-021732) DBG | Using libvirt version 6000000
	I0610 10:22:10.838766   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.839129   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:10.839172   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.839314   11511 main.go:141] libmachine: Docker is up and running!
	I0610 10:22:10.839326   11511 main.go:141] libmachine: Reticulating splines...
	I0610 10:22:10.839334   11511 client.go:171] duration metric: took 21.435451924s to LocalClient.Create
	I0610 10:22:10.839361   11511 start.go:167] duration metric: took 21.435501976s to libmachine.API.Create "addons-021732"
	I0610 10:22:10.839373   11511 start.go:293] postStartSetup for "addons-021732" (driver="kvm2")
	I0610 10:22:10.839390   11511 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 10:22:10.839412   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:10.839654   11511 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 10:22:10.839676   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:10.841993   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.842280   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:10.842296   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.842457   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:10.842624   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:10.842797   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:10.842945   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:10.923172   11511 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 10:22:10.927454   11511 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 10:22:10.927481   11511 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/addons for local assets ...
	I0610 10:22:10.927551   11511 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/files for local assets ...
	I0610 10:22:10.927573   11511 start.go:296] duration metric: took 88.191201ms for postStartSetup
	I0610 10:22:10.927602   11511 main.go:141] libmachine: (addons-021732) Calling .GetConfigRaw
	I0610 10:22:10.928177   11511 main.go:141] libmachine: (addons-021732) Calling .GetIP
	I0610 10:22:10.930881   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.931294   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:10.931314   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.931643   11511 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/config.json ...
	I0610 10:22:10.931868   11511 start.go:128] duration metric: took 21.546245786s to createHost
	I0610 10:22:10.931894   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:10.934754   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.935163   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:10.935194   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.935379   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:10.935559   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:10.935742   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:10.935864   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:10.936020   11511 main.go:141] libmachine: Using SSH client type: native
	I0610 10:22:10.936180   11511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0610 10:22:10.936190   11511 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 10:22:11.037977   11511 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718014930.997399018
	
	I0610 10:22:11.038002   11511 fix.go:216] guest clock: 1718014930.997399018
	I0610 10:22:11.038011   11511 fix.go:229] Guest: 2024-06-10 10:22:10.997399018 +0000 UTC Remote: 2024-06-10 10:22:10.931882063 +0000 UTC m=+21.648444948 (delta=65.516955ms)
	I0610 10:22:11.038060   11511 fix.go:200] guest clock delta is within tolerance: 65.516955ms
	I0610 10:22:11.038068   11511 start.go:83] releasing machines lock for "addons-021732", held for 21.652524556s
	I0610 10:22:11.038096   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:11.038405   11511 main.go:141] libmachine: (addons-021732) Calling .GetIP
	I0610 10:22:11.040989   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:11.041443   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:11.041471   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:11.041604   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:11.042090   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:11.042310   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:11.042413   11511 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 10:22:11.042452   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:11.042535   11511 ssh_runner.go:195] Run: cat /version.json
	I0610 10:22:11.042551   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:11.044973   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:11.045049   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:11.045383   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:11.045416   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:11.045439   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:11.045500   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:11.045586   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:11.045788   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:11.045790   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:11.045927   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:11.046000   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:11.046043   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:11.046103   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:11.046234   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:11.169207   11511 ssh_runner.go:195] Run: systemctl --version
	I0610 10:22:11.175146   11511 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 10:22:11.340855   11511 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 10:22:11.346606   11511 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 10:22:11.346664   11511 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 10:22:11.362850   11511 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 10:22:11.362877   11511 start.go:494] detecting cgroup driver to use...
	I0610 10:22:11.362936   11511 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 10:22:11.379694   11511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 10:22:11.393162   11511 docker.go:217] disabling cri-docker service (if available) ...
	I0610 10:22:11.393215   11511 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 10:22:11.409101   11511 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 10:22:11.422412   11511 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 10:22:11.531476   11511 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 10:22:11.671983   11511 docker.go:233] disabling docker service ...
	I0610 10:22:11.672061   11511 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 10:22:11.685151   11511 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 10:22:11.697547   11511 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 10:22:11.808530   11511 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 10:22:11.926000   11511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 10:22:11.939925   11511 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 10:22:11.957031   11511 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0610 10:22:11.957101   11511 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:22:11.967189   11511 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 10:22:11.967259   11511 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:22:11.977385   11511 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:22:11.987234   11511 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:22:11.997028   11511 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 10:22:12.008455   11511 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:22:12.019735   11511 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:22:12.035735   11511 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:22:12.045595   11511 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 10:22:12.055221   11511 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0610 10:22:12.055286   11511 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0610 10:22:12.068098   11511 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 10:22:12.077742   11511 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:22:12.194498   11511 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0610 10:22:12.324870   11511 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 10:22:12.324974   11511 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 10:22:12.329373   11511 start.go:562] Will wait 60s for crictl version
	I0610 10:22:12.329460   11511 ssh_runner.go:195] Run: which crictl
	I0610 10:22:12.332853   11511 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 10:22:12.371659   11511 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0610 10:22:12.371781   11511 ssh_runner.go:195] Run: crio --version
	I0610 10:22:12.397154   11511 ssh_runner.go:195] Run: crio --version
	I0610 10:22:12.427999   11511 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0610 10:22:12.429770   11511 main.go:141] libmachine: (addons-021732) Calling .GetIP
	I0610 10:22:12.432457   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:12.432818   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:12.432846   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:12.433055   11511 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0610 10:22:12.437030   11511 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 10:22:12.449056   11511 kubeadm.go:877] updating cluster {Name:addons-021732 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-021732 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 10:22:12.449162   11511 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 10:22:12.449206   11511 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 10:22:12.480175   11511 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0610 10:22:12.480258   11511 ssh_runner.go:195] Run: which lz4
	I0610 10:22:12.483870   11511 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0610 10:22:12.487927   11511 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 10:22:12.487967   11511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0610 10:22:13.679875   11511 crio.go:462] duration metric: took 1.196047703s to copy over tarball
	I0610 10:22:13.679944   11511 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 10:22:15.956372   11511 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.276395309s)
	I0610 10:22:15.956403   11511 crio.go:469] duration metric: took 2.276502967s to extract the tarball
	I0610 10:22:15.956412   11511 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 10:22:15.992742   11511 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 10:22:16.031848   11511 crio.go:514] all images are preloaded for cri-o runtime.
	I0610 10:22:16.031870   11511 cache_images.go:84] Images are preloaded, skipping loading
	I0610 10:22:16.031878   11511 kubeadm.go:928] updating node { 192.168.39.244 8443 v1.30.1 crio true true} ...
	I0610 10:22:16.031969   11511 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-021732 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-021732 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 10:22:16.032032   11511 ssh_runner.go:195] Run: crio config
	I0610 10:22:16.080464   11511 cni.go:84] Creating CNI manager for ""
	I0610 10:22:16.080482   11511 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 10:22:16.080490   11511 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 10:22:16.080510   11511 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.244 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-021732 NodeName:addons-021732 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.244"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.244 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 10:22:16.080644   11511 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.244
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-021732"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.244
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.244"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 10:22:16.080716   11511 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 10:22:16.090288   11511 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 10:22:16.090368   11511 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 10:22:16.099170   11511 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0610 10:22:16.114472   11511 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 10:22:16.129333   11511 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0610 10:22:16.144593   11511 ssh_runner.go:195] Run: grep 192.168.39.244	control-plane.minikube.internal$ /etc/hosts
	I0610 10:22:16.148173   11511 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.244	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 10:22:16.159292   11511 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:22:16.291407   11511 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 10:22:16.307764   11511 certs.go:68] Setting up /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732 for IP: 192.168.39.244
	I0610 10:22:16.307790   11511 certs.go:194] generating shared ca certs ...
	I0610 10:22:16.307809   11511 certs.go:226] acquiring lock for ca certs: {Name:mke8b68fecbd8b649419d142cdde25446085a9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:16.307987   11511 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key
	I0610 10:22:16.360498   11511 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt ...
	I0610 10:22:16.360527   11511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt: {Name:mka5aee245599ed1c73a6589e4bd7041817accf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:16.360720   11511 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key ...
	I0610 10:22:16.360737   11511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key: {Name:mke821ebc9a1f87cafb59cae5dc616ee25e2a67c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:16.360837   11511 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key
	I0610 10:22:16.414996   11511 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt ...
	I0610 10:22:16.415020   11511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt: {Name:mk1eb7154d51413f36bfe7ec5ebca9175f12c53f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:16.415195   11511 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key ...
	I0610 10:22:16.415212   11511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key: {Name:mk3d8b84fe579a4f2beabd4c3f73806adb29637d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:16.415318   11511 certs.go:256] generating profile certs ...
	I0610 10:22:16.415393   11511 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.key
	I0610 10:22:16.415414   11511 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt with IP's: []
	I0610 10:22:16.600910   11511 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt ...
	I0610 10:22:16.600941   11511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: {Name:mk6ee51d7a9f9a0656ea660e6de93886eb2d79ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:16.601128   11511 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.key ...
	I0610 10:22:16.601145   11511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.key: {Name:mkb4515cd87b5353d29e229ea3c778e43a085bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:16.601249   11511 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/apiserver.key.56f9b0b4
	I0610 10:22:16.601270   11511 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/apiserver.crt.56f9b0b4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.244]
	I0610 10:22:16.648074   11511 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/apiserver.crt.56f9b0b4 ...
	I0610 10:22:16.648110   11511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/apiserver.crt.56f9b0b4: {Name:mk2f86f460b055062ad012cbb6ae1733f96777ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:16.648304   11511 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/apiserver.key.56f9b0b4 ...
	I0610 10:22:16.648323   11511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/apiserver.key.56f9b0b4: {Name:mkdea12d61593f69a56bef54ad06acc161e91f21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:16.648420   11511 certs.go:381] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/apiserver.crt.56f9b0b4 -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/apiserver.crt
	I0610 10:22:16.648514   11511 certs.go:385] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/apiserver.key.56f9b0b4 -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/apiserver.key
	I0610 10:22:16.648580   11511 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/proxy-client.key
	I0610 10:22:16.648606   11511 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/proxy-client.crt with IP's: []
	I0610 10:22:16.866970   11511 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/proxy-client.crt ...
	I0610 10:22:16.867008   11511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/proxy-client.crt: {Name:mk267f1b3cdb4c073d022895f7afa4a7c60f29d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:16.867228   11511 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/proxy-client.key ...
	I0610 10:22:16.867251   11511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/proxy-client.key: {Name:mk702199a561852b7205391efdfb13e22bee7cc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:16.867505   11511 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 10:22:16.867545   11511 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem (1082 bytes)
	I0610 10:22:16.867581   11511 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem (1123 bytes)
	I0610 10:22:16.867624   11511 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem (1675 bytes)
	I0610 10:22:16.868242   11511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 10:22:16.892285   11511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 10:22:16.914376   11511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 10:22:16.936371   11511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 10:22:16.958373   11511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0610 10:22:16.982406   11511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 10:22:17.017509   11511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 10:22:17.044204   11511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 10:22:17.066098   11511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 10:22:17.087984   11511 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 10:22:17.103883   11511 ssh_runner.go:195] Run: openssl version
	I0610 10:22:17.109446   11511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 10:22:17.119696   11511 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:22:17.123615   11511 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:22:17.123668   11511 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:22:17.129098   11511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 10:22:17.138759   11511 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 10:22:17.142392   11511 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 10:22:17.142441   11511 kubeadm.go:391] StartCluster: {Name:addons-021732 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-021732 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:22:17.142535   11511 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0610 10:22:17.142584   11511 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 10:22:17.176071   11511 cri.go:89] found id: ""
	I0610 10:22:17.176133   11511 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 10:22:17.185884   11511 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 10:22:17.194886   11511 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 10:22:17.203675   11511 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 10:22:17.203703   11511 kubeadm.go:156] found existing configuration files:
	
	I0610 10:22:17.203748   11511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 10:22:17.212071   11511 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 10:22:17.212139   11511 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 10:22:17.220828   11511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 10:22:17.229346   11511 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 10:22:17.229413   11511 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 10:22:17.238080   11511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 10:22:17.246321   11511 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 10:22:17.246376   11511 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 10:22:17.254950   11511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 10:22:17.263256   11511 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 10:22:17.263317   11511 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 10:22:17.271922   11511 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 10:22:17.333616   11511 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0610 10:22:17.333697   11511 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 10:22:17.462068   11511 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 10:22:17.462205   11511 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 10:22:17.462426   11511 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 10:22:17.653282   11511 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 10:22:17.812077   11511 out.go:204]   - Generating certificates and keys ...
	I0610 10:22:17.812183   11511 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 10:22:17.812253   11511 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 10:22:18.044566   11511 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 10:22:18.349721   11511 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0610 10:22:18.621027   11511 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0610 10:22:18.898324   11511 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0610 10:22:19.050267   11511 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0610 10:22:19.050408   11511 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-021732 localhost] and IPs [192.168.39.244 127.0.0.1 ::1]
	I0610 10:22:19.192253   11511 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0610 10:22:19.192427   11511 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-021732 localhost] and IPs [192.168.39.244 127.0.0.1 ::1]
	I0610 10:22:19.301659   11511 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 10:22:19.644535   11511 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 10:22:19.908664   11511 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0610 10:22:19.908825   11511 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 10:22:20.134821   11511 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 10:22:20.421465   11511 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 10:22:20.546558   11511 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 10:22:20.770192   11511 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 10:22:20.888676   11511 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 10:22:20.889201   11511 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 10:22:20.892246   11511 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 10:22:20.894114   11511 out.go:204]   - Booting up control plane ...
	I0610 10:22:20.894210   11511 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 10:22:20.894284   11511 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 10:22:20.894894   11511 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 10:22:20.910491   11511 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 10:22:20.911472   11511 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 10:22:20.911524   11511 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 10:22:21.039434   11511 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0610 10:22:21.039571   11511 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 10:22:21.541408   11511 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.892392ms
	I0610 10:22:21.541492   11511 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0610 10:22:28.044380   11511 kubeadm.go:309] [api-check] The API server is healthy after 6.501040579s
	I0610 10:22:28.056064   11511 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 10:22:28.073006   11511 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 10:22:28.099386   11511 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 10:22:28.099647   11511 kubeadm.go:309] [mark-control-plane] Marking the node addons-021732 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 10:22:28.111389   11511 kubeadm.go:309] [bootstrap-token] Using token: u7nktn.l02ueaavloy4yy05
	I0610 10:22:28.113155   11511 out.go:204]   - Configuring RBAC rules ...
	I0610 10:22:28.113302   11511 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 10:22:28.121519   11511 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 10:22:28.134963   11511 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 10:22:28.138911   11511 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 10:22:28.142459   11511 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 10:22:28.145784   11511 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 10:22:28.449504   11511 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 10:22:28.902660   11511 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 10:22:29.450218   11511 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 10:22:29.450243   11511 kubeadm.go:309] 
	I0610 10:22:29.450337   11511 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 10:22:29.450361   11511 kubeadm.go:309] 
	I0610 10:22:29.450453   11511 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 10:22:29.450470   11511 kubeadm.go:309] 
	I0610 10:22:29.450519   11511 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 10:22:29.450601   11511 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 10:22:29.450682   11511 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 10:22:29.450692   11511 kubeadm.go:309] 
	I0610 10:22:29.450777   11511 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 10:22:29.450792   11511 kubeadm.go:309] 
	I0610 10:22:29.450867   11511 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 10:22:29.450877   11511 kubeadm.go:309] 
	I0610 10:22:29.450949   11511 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 10:22:29.451044   11511 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 10:22:29.451130   11511 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 10:22:29.451140   11511 kubeadm.go:309] 
	I0610 10:22:29.451263   11511 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 10:22:29.451370   11511 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 10:22:29.451382   11511 kubeadm.go:309] 
	I0610 10:22:29.451484   11511 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token u7nktn.l02ueaavloy4yy05 \
	I0610 10:22:29.451604   11511 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e \
	I0610 10:22:29.451636   11511 kubeadm.go:309] 	--control-plane 
	I0610 10:22:29.451647   11511 kubeadm.go:309] 
	I0610 10:22:29.451751   11511 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 10:22:29.451760   11511 kubeadm.go:309] 
	I0610 10:22:29.451870   11511 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token u7nktn.l02ueaavloy4yy05 \
	I0610 10:22:29.452040   11511 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e 
	I0610 10:22:29.452144   11511 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
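	The [WARNING Service-Kubelet] line is kubeadm's standard advisory rather than a failure: the remedy it names is the single command below, and since the init output above reports the control plane as initialized successfully, the warning reads as informational in this run.

	    sudo systemctl enable kubelet.service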
	I0610 10:22:29.452161   11511 cni.go:84] Creating CNI manager for ""
	I0610 10:22:29.452170   11511 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 10:22:29.454095   11511 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 10:22:29.455437   11511 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 10:22:29.465453   11511 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
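	The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced two lines earlier. The log does not echo its contents; as a rough sketch, a bridge conflist of this kind typically pairs the bridge plugin with host-local IPAM and a portmap plugin (the values below are illustrative, not the exact bytes minikube wrote):

	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }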
	I0610 10:22:29.486680   11511 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 10:22:29.486773   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:29.486877   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-021732 minikube.k8s.io/updated_at=2024_06_10T10_22_29_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=addons-021732 minikube.k8s.io/primary=true
	I0610 10:22:29.533644   11511 ops.go:34] apiserver oom_adj: -16
	I0610 10:22:29.648721   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:30.149074   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:30.648868   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:31.149416   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:31.649453   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:32.149201   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:32.649429   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:33.149343   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:33.649097   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:34.149718   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:34.648840   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:35.149571   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:35.648972   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:36.149208   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:36.649767   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:37.149623   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:37.648852   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:38.149733   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:38.649139   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:39.149383   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:39.648827   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:40.149366   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:40.648838   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:41.148981   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:41.649674   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:42.149418   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:42.248475   11511 kubeadm.go:1107] duration metric: took 12.761770799s to wait for elevateKubeSystemPrivileges
	W0610 10:22:42.248513   11511 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 10:22:42.248523   11511 kubeadm.go:393] duration metric: took 25.106086137s to StartCluster
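	The run of repeated "kubectl get sa default" calls above (10:22:29 through 10:22:42, roughly every 500ms) is minikube waiting for the default service account to exist as part of the elevateKubeSystemPrivileges step it reports here; in shell terms the loop amounts to something like the sketch below (illustrative, not minikube's actual implementation):

	    # poll until the default service account exists, then proceed
	    until sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done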
	I0610 10:22:42.248544   11511 settings.go:142] acquiring lock: {Name:mk00410f6b6051b7558c7a348cc8c9f1c35c7547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:42.248667   11511 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 10:22:42.249143   11511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/kubeconfig: {Name:mk6bc087e599296d9e4a696a021944fac20ee98b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:42.249366   11511 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 10:22:42.249388   11511 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 10:22:42.251164   11511 out.go:177] * Verifying Kubernetes components...
	I0610 10:22:42.249439   11511 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0610 10:22:42.251247   11511 addons.go:69] Setting yakd=true in profile "addons-021732"
	I0610 10:22:42.251261   11511 addons.go:69] Setting cloud-spanner=true in profile "addons-021732"
	I0610 10:22:42.251273   11511 addons.go:69] Setting registry=true in profile "addons-021732"
	I0610 10:22:42.251283   11511 addons.go:234] Setting addon yakd=true in "addons-021732"
	I0610 10:22:42.251289   11511 addons.go:234] Setting addon cloud-spanner=true in "addons-021732"
	I0610 10:22:42.251287   11511 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-021732"
	I0610 10:22:42.251301   11511 addons.go:69] Setting inspektor-gadget=true in profile "addons-021732"
	I0610 10:22:42.251314   11511 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-021732"
	I0610 10:22:42.251317   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.251322   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.251327   11511 addons.go:234] Setting addon inspektor-gadget=true in "addons-021732"
	I0610 10:22:42.251329   11511 addons.go:69] Setting storage-provisioner=true in profile "addons-021732"
	I0610 10:22:42.251349   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.251344   11511 addons.go:69] Setting volcano=true in profile "addons-021732"
	I0610 10:22:42.251353   11511 addons.go:234] Setting addon storage-provisioner=true in "addons-021732"
	I0610 10:22:42.251384   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.251388   11511 addons.go:234] Setting addon volcano=true in "addons-021732"
	I0610 10:22:42.251430   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.249634   11511 config.go:182] Loaded profile config "addons-021732": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:22:42.251745   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.251749   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.251758   11511 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-021732"
	I0610 10:22:42.251762   11511 addons.go:69] Setting gcp-auth=true in profile "addons-021732"
	I0610 10:22:42.251760   11511 addons.go:69] Setting volumesnapshots=true in profile "addons-021732"
	I0610 10:22:42.251768   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.251777   11511 mustload.go:65] Loading cluster: addons-021732
	I0610 10:22:42.251774   11511 addons.go:69] Setting helm-tiller=true in profile "addons-021732"
	I0610 10:22:42.251782   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.251744   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.251798   11511 addons.go:234] Setting addon helm-tiller=true in "addons-021732"
	I0610 10:22:42.251801   11511 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-021732"
	I0610 10:22:42.251804   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.251817   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.251823   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.251821   11511 addons.go:69] Setting default-storageclass=true in profile "addons-021732"
	I0610 10:22:42.251845   11511 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-021732"
	I0610 10:22:42.251784   11511 addons.go:234] Setting addon volumesnapshots=true in "addons-021732"
	I0610 10:22:42.257630   11511 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:22:42.251745   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.251250   11511 addons.go:69] Setting ingress-dns=true in profile "addons-021732"
	I0610 10:22:42.257776   11511 addons.go:234] Setting addon ingress-dns=true in "addons-021732"
	I0610 10:22:42.257821   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.251761   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.257892   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.251925   11511 config.go:182] Loaded profile config "addons-021732": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:22:42.251940   11511 addons.go:69] Setting ingress=true in profile "addons-021732"
	I0610 10:22:42.258072   11511 addons.go:234] Setting addon ingress=true in "addons-021732"
	I0610 10:22:42.258113   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.251294   11511 addons.go:234] Setting addon registry=true in "addons-021732"
	I0610 10:22:42.258312   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.258336   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.258340   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.258448   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.258471   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.251956   11511 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-021732"
	I0610 10:22:42.258565   11511 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-021732"
	I0610 10:22:42.258589   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.258677   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.251965   11511 addons.go:69] Setting metrics-server=true in profile "addons-021732"
	I0610 10:22:42.252138   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.252174   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.252193   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.252195   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.257735   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.258718   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.251990   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.258788   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.258864   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.258910   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.258918   11511 addons.go:234] Setting addon metrics-server=true in "addons-021732"
	I0610 10:22:42.258936   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.259044   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.259064   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.259219   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.259246   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.264602   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.265001   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.265050   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.272999   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43495
	I0610 10:22:42.273483   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.274166   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.274186   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.278753   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33759
	I0610 10:22:42.279469   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.279481   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38697
	I0610 10:22:42.279844   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.280071   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.280091   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.280441   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.280650   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44759
	I0610 10:22:42.280999   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.281036   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.281070   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.281443   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.281463   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.281607   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.281617   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.281768   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.281998   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.289261   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.289308   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.289402   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.289438   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.289446   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.289466   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.293520   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.293607   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38867
	I0610 10:22:42.293724   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41717
	I0610 10:22:42.293798   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39599
	I0610 10:22:42.293856   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36151
	I0610 10:22:42.294420   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.294454   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.300302   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.300355   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.300424   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.301006   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.301024   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.301087   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.301559   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.301580   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.301652   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.301779   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.301788   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.301846   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.301959   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.301969   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.302173   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.302464   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.302799   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.302888   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.302949   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.303016   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.303835   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.303924   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.309234   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.309625   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.309670   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.317136   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45165
	I0610 10:22:42.317854   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.319685   11511 addons.go:234] Setting addon default-storageclass=true in "addons-021732"
	I0610 10:22:42.319731   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.320157   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.320189   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.321015   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.321037   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.321439   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.321988   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.322025   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.322255   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0610 10:22:42.322852   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.323348   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.323364   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.323714   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.324237   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.324273   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.326995   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34247
	I0610 10:22:42.327401   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.327843   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.327860   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.328192   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.328733   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.328767   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.328985   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43355
	I0610 10:22:42.329545   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.330146   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.330172   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.330562   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.331198   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.331815   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36565
	I0610 10:22:42.332380   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.332895   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.332916   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.333262   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.333321   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.333601   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:42.333616   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:42.333818   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:42.333846   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:42.333867   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:42.333869   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.333879   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:42.333889   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:42.333904   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.334122   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:42.334140   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	W0610 10:22:42.334242   11511 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
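	The volcano error is expected on this runtime rather than a test defect: the addon reports that it does not support crio, so minikube skips it with a warning and continues enabling the remaining addons. For a crio profile the addon can simply be left disabled, e.g.:

	    minikube addons disable volcano -p addons-021732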
	I0610 10:22:42.347603   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43423
	I0610 10:22:42.348140   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.348725   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.348742   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.349163   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.349387   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.349989   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46293
	I0610 10:22:42.350634   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.351119   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.351138   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.351452   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.351812   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.353965   11511 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-021732"
	I0610 10:22:42.354010   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.354376   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.354415   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.355376   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40833
	I0610 10:22:42.355499   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32809
	I0610 10:22:42.355589   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34981
	I0610 10:22:42.355708   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46739
	I0610 10:22:42.355778   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.356082   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.357719   11511 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0610 10:22:42.356465   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.356512   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34521
	I0610 10:22:42.356560   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.356792   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.357181   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.359156   11511 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0610 10:22:42.359169   11511 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0610 10:22:42.359188   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.359334   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.359963   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.360090   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.360104   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.360123   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.360135   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.360759   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.363153   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.364927   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.365015   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.365067   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.365093   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.365119   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.365152   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41507
	I0610 10:22:42.365290   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.365358   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.365422   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46549
	I0610 10:22:42.365608   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.365629   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.365757   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.365941   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.366375   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.366416   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.366541   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.366617   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.366713   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.367118   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.367132   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.367459   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.367533   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.367553   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.367899   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.367921   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.368102   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.368672   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.368728   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.369478   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.369516   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.370254   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.372758   11511 out.go:177]   - Using image docker.io/registry:2.8.3
	I0610 10:22:42.371588   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.373031   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38869
	I0610 10:22:42.375297   11511 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0610 10:22:42.376566   11511 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0610 10:22:42.374649   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.374694   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.376309   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37413
	I0610 10:22:42.376512   11511 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0610 10:22:42.377839   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0610 10:22:42.377861   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.377919   11511 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0610 10:22:42.377926   11511 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0610 10:22:42.377938   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.378739   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36175
	I0610 10:22:42.379014   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.379027   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.379155   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.379165   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.379348   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.379540   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.379597   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.379644   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.379772   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.382045   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.382062   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.382112   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40857
	I0610 10:22:42.382287   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.383034   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.383060   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.383087   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.383130   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.383214   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.383267   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.383865   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.383918   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.384494   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.384671   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.385572   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.385789   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.385810   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.387694   11511 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0610 10:22:42.386241   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.386342   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.386895   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.389064   11511 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0610 10:22:42.389085   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0610 10:22:42.389113   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.389191   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.389216   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.389300   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44357
	I0610 10:22:42.389311   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42675
	I0610 10:22:42.389453   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.389511   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.389529   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.389541   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.391152   11511 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.29.0
	I0610 10:22:42.390037   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.390129   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.390521   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.390565   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.390705   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.392382   11511 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0610 10:22:42.392553   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.393177   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.394215   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43245
	I0610 10:22:42.394512   11511 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 10:22:42.394458   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.394607   11511 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0610 10:22:42.396465   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.394653   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.396514   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.394717   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.395175   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.396564   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.395215   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.395256   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.395355   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.396923   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.396117   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.396408   11511 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 10:22:42.397028   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 10:22:42.397044   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.396590   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.397259   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.398769   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.398796   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.398953   11511 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0610 10:22:42.397878   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.397893   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.398207   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.399854   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.401067   11511 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0610 10:22:42.400135   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.400545   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.400753   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.401024   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.401367   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46681
	I0610 10:22:42.401393   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.402145   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.402461   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.402763   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.404184   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.404280   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.404306   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.404470   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.405232   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45209
	I0610 10:22:42.405331   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.405445   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36415
	I0610 10:22:42.405931   11511 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0610 10:22:42.406177   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.406484   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.407359   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.407375   11511 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0610 10:22:42.408564   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.408582   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.407462   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37109
	I0610 10:22:42.407563   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.407855   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.407937   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.407971   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.408263   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.408988   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.410018   11511 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0610 10:22:42.412147   11511 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0610 10:22:42.410451   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.411034   11511 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0610 10:22:42.411348   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.411367   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.411382   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.411554   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.411658   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.413724   11511 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0610 10:22:42.414488   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0610 10:22:42.414507   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.415837   11511 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0610 10:22:42.415850   11511 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0610 10:22:42.415861   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.413913   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.415898   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.417071   11511 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0610 10:22:42.414620   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.414759   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.416293   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.418367   11511 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0610 10:22:42.416564   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.417373   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.417425   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.417492   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.421150   11511 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0610 10:22:42.420077   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.420140   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.420355   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.421669   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.421696   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.422053   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.422422   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.422472   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.423537   11511 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0610 10:22:42.423565   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.423869   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.424611   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.426036   11511 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0610 10:22:42.426052   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0610 10:22:42.426063   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.427562   11511 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0610 10:22:42.423949   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.424631   11511 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0610 10:22:42.424637   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.424828   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.425062   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35935
	I0610 10:22:42.425276   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.428735   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.429107   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.429124   11511 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0610 10:22:42.429380   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.430226   11511 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0610 10:22:42.430257   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.430468   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.432824   11511 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0610 10:22:42.430473   11511 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 10:22:42.430709   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.431491   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.431518   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0610 10:22:42.431663   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.431660   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.431679   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.434147   11511 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 10:22:42.434161   11511 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0610 10:22:42.434681   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.435362   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.435488   11511 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0610 10:22:42.435501   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0610 10:22:42.435520   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.435575   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.435612   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.435700   11511 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0610 10:22:42.435719   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.442249   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.442272   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.442284   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.442291   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.442312   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.442349   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.442504   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.442668   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.442711   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.442794   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.442941   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.444342   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.446306   11511 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0610 10:22:42.446308   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.444610   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.446336   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.446350   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.445575   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.447686   11511 out.go:177]   - Using image docker.io/busybox:stable
	I0610 10:22:42.445086   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.446373   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.446495   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.446629   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.446698   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.449010   11511 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0610 10:22:42.449030   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0610 10:22:42.447761   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.447801   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.449058   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.449076   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.447880   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.449109   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.448014   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.450026   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.450265   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.450593   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.450813   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.450976   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.452590   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.452996   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.453018   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.453313   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.453457   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.453604   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.453717   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.631538   11511 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0610 10:22:42.631563   11511 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 10:22:42.840669   11511 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0610 10:22:42.840692   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0610 10:22:42.868545   11511 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0610 10:22:42.868569   11511 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0610 10:22:42.895477   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0610 10:22:42.905161   11511 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0610 10:22:42.905187   11511 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0610 10:22:42.923174   11511 node_ready.go:35] waiting up to 6m0s for node "addons-021732" to be "Ready" ...
	I0610 10:22:42.926554   11511 node_ready.go:49] node "addons-021732" has status "Ready":"True"
	I0610 10:22:42.926576   11511 node_ready.go:38] duration metric: took 3.376822ms for node "addons-021732" to be "Ready" ...
	I0610 10:22:42.926583   11511 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 10:22:42.932885   11511 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jnxqr" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:42.961126   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0610 10:22:42.970642   11511 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0610 10:22:42.970671   11511 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0610 10:22:42.977915   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0610 10:22:42.993070   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0610 10:22:42.998860   11511 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0610 10:22:42.998880   11511 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0610 10:22:43.020779   11511 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0610 10:22:43.020799   11511 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0610 10:22:43.032088   11511 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0610 10:22:43.032111   11511 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0610 10:22:43.049458   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 10:22:43.054731   11511 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0610 10:22:43.054752   11511 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0610 10:22:43.064414   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 10:22:43.066140   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0610 10:22:43.084655   11511 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0610 10:22:43.084690   11511 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0610 10:22:43.142738   11511 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0610 10:22:43.142762   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0610 10:22:43.189500   11511 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0610 10:22:43.189529   11511 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0610 10:22:43.233642   11511 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0610 10:22:43.233667   11511 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0610 10:22:43.286609   11511 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0610 10:22:43.286631   11511 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0610 10:22:43.296691   11511 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0610 10:22:43.296712   11511 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0610 10:22:43.298001   11511 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0610 10:22:43.298018   11511 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0610 10:22:43.345105   11511 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0610 10:22:43.345134   11511 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0610 10:22:43.355667   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0610 10:22:43.358700   11511 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0610 10:22:43.358724   11511 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0610 10:22:43.450778   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0610 10:22:43.476495   11511 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0610 10:22:43.476515   11511 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0610 10:22:43.492222   11511 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0610 10:22:43.492242   11511 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0610 10:22:43.496126   11511 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0610 10:22:43.496143   11511 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0610 10:22:43.530452   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0610 10:22:43.546331   11511 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0610 10:22:43.546353   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0610 10:22:43.603303   11511 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0610 10:22:43.603342   11511 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0610 10:22:43.637692   11511 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0610 10:22:43.637713   11511 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0610 10:22:43.672226   11511 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0610 10:22:43.672249   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0610 10:22:43.751557   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0610 10:22:43.785082   11511 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0610 10:22:43.785107   11511 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0610 10:22:43.838512   11511 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0610 10:22:43.838544   11511 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0610 10:22:43.876935   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0610 10:22:43.970330   11511 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0610 10:22:43.970359   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0610 10:22:44.082890   11511 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.451313731s)
	I0610 10:22:44.082924   11511 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
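For reference, the CoreDNS rewrite that just completed above amounts to inserting a hosts block into the Corefile so that host.minikube.internal resolves to the host-side bridge IP (192.168.39.1 in this run). A hand-run sketch of the same edit, using a plain kubectl against the same kubeconfig instead of the bundled /var/lib/minikube/binaries path, would look roughly like this (the sed expression is copied from the logged command; the surrounding Corefile layout is whatever the cluster already has):

	# Sketch only: re-apply the coredns ConfigMap with a hosts{} block added
	# ahead of the existing "forward . /etc/resolv.conf" directive.
	kubectl --kubeconfig /var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml \
	  | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' \
	  | kubectl --kubeconfig /var/lib/minikube/kubeconfig replace -f -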
	I0610 10:22:44.230551   11511 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0610 10:22:44.230578   11511 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0610 10:22:44.397124   11511 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0610 10:22:44.397149   11511 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0610 10:22:44.587186   11511 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-021732" context rescaled to 1 replicas
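The "rescaled to 1 replicas" line above is minikube trimming the stock two-replica coredns Deployment down to one for a single-node cluster; a minimal manual equivalent (a sketch, not minikube's own code path) is simply:

	# Scale the kube-system coredns Deployment to a single replica.
	kubectl --kubeconfig /var/lib/minikube/kubeconfig -n kube-system scale deployment coredns --replicas=1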
	I0610 10:22:44.601643   11511 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0610 10:22:44.601677   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0610 10:22:44.707722   11511 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0610 10:22:44.707752   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0610 10:22:44.939893   11511 pod_ready.go:102] pod "coredns-7db6d8ff4d-jnxqr" in "kube-system" namespace has status "Ready":"False"
	I0610 10:22:44.968117   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0610 10:22:45.020032   11511 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0610 10:22:45.020064   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0610 10:22:45.334174   11511 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0610 10:22:45.334202   11511 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0610 10:22:45.479866   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0610 10:22:47.140850   11511 pod_ready.go:102] pod "coredns-7db6d8ff4d-jnxqr" in "kube-system" namespace has status "Ready":"False"
	I0610 10:22:47.478634   11511 pod_ready.go:92] pod "coredns-7db6d8ff4d-jnxqr" in "kube-system" namespace has status "Ready":"True"
	I0610 10:22:47.478654   11511 pod_ready.go:81] duration metric: took 4.545740316s for pod "coredns-7db6d8ff4d-jnxqr" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:47.478666   11511 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rx46l" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:47.525350   11511 pod_ready.go:92] pod "coredns-7db6d8ff4d-rx46l" in "kube-system" namespace has status "Ready":"True"
	I0610 10:22:47.525374   11511 pod_ready.go:81] duration metric: took 46.702228ms for pod "coredns-7db6d8ff4d-rx46l" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:47.525388   11511 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-021732" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:47.597922   11511 pod_ready.go:92] pod "etcd-addons-021732" in "kube-system" namespace has status "Ready":"True"
	I0610 10:22:47.597951   11511 pod_ready.go:81] duration metric: took 72.544019ms for pod "etcd-addons-021732" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:47.597962   11511 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-021732" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:47.690175   11511 pod_ready.go:92] pod "kube-apiserver-addons-021732" in "kube-system" namespace has status "Ready":"True"
	I0610 10:22:47.690196   11511 pod_ready.go:81] duration metric: took 92.228748ms for pod "kube-apiserver-addons-021732" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:47.690206   11511 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-021732" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:47.745062   11511 pod_ready.go:92] pod "kube-controller-manager-addons-021732" in "kube-system" namespace has status "Ready":"True"
	I0610 10:22:47.745089   11511 pod_ready.go:81] duration metric: took 54.875224ms for pod "kube-controller-manager-addons-021732" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:47.745102   11511 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7846w" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:47.876761   11511 pod_ready.go:92] pod "kube-proxy-7846w" in "kube-system" namespace has status "Ready":"True"
	I0610 10:22:47.876787   11511 pod_ready.go:81] duration metric: took 131.677995ms for pod "kube-proxy-7846w" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:47.876803   11511 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-021732" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:48.257894   11511 pod_ready.go:92] pod "kube-scheduler-addons-021732" in "kube-system" namespace has status "Ready":"True"
	I0610 10:22:48.257916   11511 pod_ready.go:81] duration metric: took 381.105399ms for pod "kube-scheduler-addons-021732" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:48.257924   11511 pod_ready.go:38] duration metric: took 5.331331023s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 10:22:48.257938   11511 api_server.go:52] waiting for apiserver process to appear ...
	I0610 10:22:48.257997   11511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
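The readiness gate that just finished is minikube's internal polling loop over the system-critical pod labels listed above; it can be approximated from a shell with kubectl wait, shown here for the kube-dns label only as a sketch (the 6m timeout mirrors the value in the log), followed by the same apiserver process check the log runs next:

	# Approximate the system-pod readiness check (sketch; minikube polls each label in turn).
	kubectl --kubeconfig /var/lib/minikube/kubeconfig -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
	# Process check as logged:
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'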
	I0610 10:22:49.497202   11511 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0610 10:22:49.497244   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:49.500209   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:49.500597   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:49.500625   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:49.500789   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:49.501030   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:49.501204   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:49.501340   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:49.649840   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.754319111s)
	I0610 10:22:49.649895   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.688738286s)
	I0610 10:22:49.649965   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.649982   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.650010   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.656913321s)
	I0610 10:22:49.650029   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.600544568s)
	I0610 10:22:49.649963   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.672019554s)
	I0610 10:22:49.650058   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.649903   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.650078   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.650065   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.650090   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.650099   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.585663994s)
	I0610 10:22:49.650118   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.650145   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.650208   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.584042877s)
	I0610 10:22:49.650239   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.650257   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.650361   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.294650882s)
	I0610 10:22:49.650388   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.650079   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.650398   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.199585223s)
	I0610 10:22:49.650423   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.650437   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.650467   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.119986687s)
	I0610 10:22:49.650402   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.650484   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.650491   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.650045   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.650547   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.898959734s)
	I0610 10:22:49.650550   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.650561   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.650570   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.652395   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.652405   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.652414   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.652415   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.652425   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.652432   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.652483   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.652489   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.652497   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.652697   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.652714   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.652719   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.652724   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.652500   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.652734   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.652746   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.652754   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.652755   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.652755   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.652804   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.652812   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.652535   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.655031   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.655047   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.655055   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.652537   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.655111   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.655126   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.652553   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.652552   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.652569   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.652587   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.652575   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.655244   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.655254   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.655262   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.652605   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.655303   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.655313   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.652637   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.655362   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.655380   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.655391   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.652656   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.652676   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.655407   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.655416   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.655427   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.652519   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.654588   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.654605   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.654608   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.655641   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.655663   11511 addons.go:475] Verifying addon registry=true in "addons-021732"
	I0610 10:22:49.654631   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.654628   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.654648   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.654964   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.654995   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.655133   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.655322   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.652620   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.655350   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.655998   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.656016   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.656096   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.656104   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.656108   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.656996   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.658143   11511 out.go:177] * Verifying registry addon...
	I0610 10:22:49.658168   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.658189   11511 addons.go:475] Verifying addon metrics-server=true in "addons-021732"
	I0610 10:22:49.658191   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.658190   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.658209   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.658249   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.658250   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.659647   11511 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-021732 service yakd-dashboard -n yakd-dashboard
	
	I0610 10:22:49.658161   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.661130   11511 addons.go:475] Verifying addon ingress=true in "addons-021732"
	I0610 10:22:49.658198   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.661164   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.661179   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.662567   11511 out.go:177] * Verifying ingress addon...
	I0610 10:22:49.658409   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.658538   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.658589   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.658404   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.661410   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.661469   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.661864   11511 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0610 10:22:49.664128   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.665092   11511 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0610 10:22:49.665926   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.665954   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.701091   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.701111   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.701357   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.701378   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.701412   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	W0610 10:22:49.701482   11511 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
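The warning above is an optimistic-concurrency conflict ("the object has been modified") while the storage-provisioner-rancher callback tries to mark the local-path StorageClass as the default; re-running the same update once the object settles normally succeeds. A minimal manual equivalent, assuming the standard default-class annotation is what the callback sets, would be:

	# Sketch: mark local-path as the default StorageClass; retry if the
	# "object has been modified" conflict recurs.
	kubectl --kubeconfig /var/lib/minikube/kubeconfig patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'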
	I0610 10:22:49.705302   11511 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0610 10:22:49.705327   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:49.707126   11511 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0610 10:22:49.707145   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:49.718340   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.718360   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.718675   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.718710   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.718727   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.928096   11511 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0610 10:22:50.007454   11511 addons.go:234] Setting addon gcp-auth=true in "addons-021732"
	I0610 10:22:50.007519   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:50.007814   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:50.007850   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:50.022912   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43173
	I0610 10:22:50.023391   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:50.023860   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:50.023888   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:50.024224   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:50.024736   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:50.024766   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:50.040973   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38147
	I0610 10:22:50.041424   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:50.041889   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:50.041913   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:50.042278   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:50.042481   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:50.044253   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:50.044481   11511 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0610 10:22:50.044507   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:50.047170   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:50.047615   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:50.047646   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:50.047779   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:50.047977   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:50.048144   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:50.048315   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
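	The sshutil line above records the connection parameters used to reach the node (IP 192.168.39.244, port 22, the machine's id_rsa key, user docker) before running "cat /var/lib/minikube/google_application_credentials.json". As a point of reference, a minimal, hypothetical way to open such a connection with golang.org/x/crypto/ssh looks like the sketch below; it is not minikube's sshutil code, and the host-key handling is deliberately simplified for a throwaway test VM.

	// Hypothetical sketch of dialing the SSH endpoint recorded above; not minikube's sshutil.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path and address taken from the log line above; illustrative only.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only; never do this against untrusted hosts
		}
		client, err := ssh.Dial("tcp", "192.168.39.244:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("cat /var/lib/minikube/google_application_credentials.json")
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out)
	}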
	I0610 10:22:50.257224   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:50.257352   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:50.368881   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.40072667s)
	I0610 10:22:50.368938   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:50.368969   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:50.368968   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.491960707s)
	W0610 10:22:50.369008   11511 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0610 10:22:50.369037   11511 retry.go:31] will retry after 331.98459ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
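	The failure above is an ordering problem, not a broken manifest: the same kubectl apply both creates the VolumeSnapshot* CRDs and creates a VolumeSnapshotClass object, and the class cannot be mapped until the freshly created CRDs are established, hence "ensure CRDs are installed first". The log shows the standard fix, which is simply to retry the apply after a short delay. The sketch below illustrates that retry-until-CRDs-settle pattern by shelling out to kubectl; the file paths are copied from the command above, but the loop itself is illustrative and is not minikube's retry.go.

	// Illustrative retry loop for "ensure CRDs are installed first" failures; not minikube's retry.go.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		files := []string{
			"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
			// ...remaining manifests from the command in the log above
		}
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		for attempt := 1; attempt <= 5; attempt++ {
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			if err == nil {
				fmt.Print(string(out))
				return
			}
			// The first apply creates the VolumeSnapshot* CRDs, but the class object in
			// the same batch cannot be mapped until those CRDs are established; wait
			// briefly and apply again.
			fmt.Printf("apply attempt %d failed: %v\n%s", attempt, err, out)
			time.Sleep(500 * time.Millisecond)
		}
	}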
	I0610 10:22:50.369315   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:50.369336   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:50.369346   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:50.369355   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:50.369582   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:50.369588   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:50.369600   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:50.673908   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:50.674124   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:50.702062   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0610 10:22:51.171998   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:51.172513   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:51.676216   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:51.682391   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:52.177551   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:52.191357   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:52.369412   11511 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.111391758s)
	I0610 10:22:52.369445   11511 api_server.go:72] duration metric: took 10.120029627s to wait for apiserver process to appear ...
	I0610 10:22:52.369453   11511 api_server.go:88] waiting for apiserver healthz status ...
	I0610 10:22:52.369420   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.889510306s)
	I0610 10:22:52.369471   11511 api_server.go:253] Checking apiserver healthz at https://192.168.39.244:8443/healthz ...
	I0610 10:22:52.369484   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:52.369500   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:52.369483   11511 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.324980008s)
	I0610 10:22:52.371588   11511 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0610 10:22:52.369785   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:52.369803   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:52.372751   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:52.372765   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:52.374034   11511 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0610 10:22:52.372776   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:52.375293   11511 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0610 10:22:52.375304   11511 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0610 10:22:52.375531   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:52.375545   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:52.375555   11511 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-021732"
	I0610 10:22:52.376861   11511 out.go:177] * Verifying csi-hostpath-driver addon...
	I0610 10:22:52.375533   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:52.378083   11511 api_server.go:279] https://192.168.39.244:8443/healthz returned 200:
	ok
	I0610 10:22:52.378721   11511 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0610 10:22:52.380376   11511 api_server.go:141] control plane version: v1.30.1
	I0610 10:22:52.380394   11511 api_server.go:131] duration metric: took 10.936438ms to wait for apiserver health ...
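	The lines above show the API-server readiness gate: first wait for the kube-apiserver process (pgrep), then poll https://192.168.39.244:8443/healthz until it returns 200 with body "ok", and only then read the control-plane version. A minimal, hypothetical poller for that check is sketched below; skipping TLS verification is an assumption made to keep the sketch short (a real check would trust the cluster CA), and this is not minikube's api_server.go.

	// Hypothetical healthz poller mirroring the check logged above; not minikube's api_server.go.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: certificate verification is skipped for brevity; load the
			// cluster CA instead in anything real.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.39.244:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
					return
				}
			}
			time.Sleep(time.Second)
		}
		fmt.Println("apiserver did not become healthy before the deadline")
	}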
	I0610 10:22:52.380401   11511 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 10:22:52.390060   11511 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0610 10:22:52.390080   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:52.398694   11511 system_pods.go:59] 18 kube-system pods found
	I0610 10:22:52.398728   11511 system_pods.go:61] "coredns-7db6d8ff4d-jnxqr" [698b6a09-55b9-4a70-8733-9c95667a8f2d] Running
	I0610 10:22:52.398735   11511 system_pods.go:61] "coredns-7db6d8ff4d-rx46l" [8198dacc-399a-413f-ba9c-1721544a3b9a] Running
	I0610 10:22:52.398745   11511 system_pods.go:61] "csi-hostpath-attacher-0" [d1cf1ab9-7f35-4dd9-aa47-9bc40f3875ad] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0610 10:22:52.398754   11511 system_pods.go:61] "csi-hostpathplugin-9gl88" [9285d121-5350-4eb2-a327-bafaf090e4d9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0610 10:22:52.398762   11511 system_pods.go:61] "etcd-addons-021732" [07cfab01-cdff-4d4b-bf7f-aec5026381cb] Running
	I0610 10:22:52.398766   11511 system_pods.go:61] "kube-apiserver-addons-021732" [f3743640-5f88-4d65-a5d6-178669bc90b9] Running
	I0610 10:22:52.398770   11511 system_pods.go:61] "kube-controller-manager-addons-021732" [d02e8430-937d-4dc5-acf6-d07ee42cdfc3] Running
	I0610 10:22:52.398775   11511 system_pods.go:61] "kube-ingress-dns-minikube" [3e396de4-1f67-49cc-8b15-180ef259e715] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0610 10:22:52.398782   11511 system_pods.go:61] "kube-proxy-7846w" [49d2baed-2c3e-4858-8479-918a31ae3835] Running
	I0610 10:22:52.398791   11511 system_pods.go:61] "kube-scheduler-addons-021732" [20886653-83d8-4491-9a79-a417565db2b5] Running
	I0610 10:22:52.398799   11511 system_pods.go:61] "metrics-server-c59844bb4-5lbmz" [9560fdac-7849-4123-9b3f-b4042539052c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 10:22:52.398805   11511 system_pods.go:61] "nvidia-device-plugin-daemonset-2zf77" [6e61695c-8992-480f-826d-23a9f83617e8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0610 10:22:52.398814   11511 system_pods.go:61] "registry-proxy-lq94h" [4b7b9e8d-e9e9-450e-877e-156e3a37a859] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0610 10:22:52.398824   11511 system_pods.go:61] "registry-xmm5t" [50b19bb8-aabd-4c89-a304-877505b561a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0610 10:22:52.398837   11511 system_pods.go:61] "snapshot-controller-745499f584-8f7kt" [ed854b08-22d6-4798-b11a-1966d9e683c3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0610 10:22:52.398849   11511 system_pods.go:61] "snapshot-controller-745499f584-qbgdf" [928f0775-d90f-49b2-9b80-727b1dfaca99] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0610 10:22:52.398859   11511 system_pods.go:61] "storage-provisioner" [93dd7c04-05d2-42a7-9762-bdb57fa30867] Running
	I0610 10:22:52.398867   11511 system_pods.go:61] "tiller-deploy-6677d64bcd-86c76" [3257a893-b201-4088-be48-fb02698a0350] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0610 10:22:52.398878   11511 system_pods.go:74] duration metric: took 18.470489ms to wait for pod list to return data ...
	I0610 10:22:52.398887   11511 default_sa.go:34] waiting for default service account to be created ...
	I0610 10:22:52.407980   11511 default_sa.go:45] found service account: "default"
	I0610 10:22:52.408002   11511 default_sa.go:55] duration metric: took 9.106206ms for default service account to be created ...
	I0610 10:22:52.408011   11511 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 10:22:52.434757   11511 system_pods.go:86] 19 kube-system pods found
	I0610 10:22:52.434784   11511 system_pods.go:89] "coredns-7db6d8ff4d-jnxqr" [698b6a09-55b9-4a70-8733-9c95667a8f2d] Running
	I0610 10:22:52.434789   11511 system_pods.go:89] "coredns-7db6d8ff4d-rx46l" [8198dacc-399a-413f-ba9c-1721544a3b9a] Running
	I0610 10:22:52.434796   11511 system_pods.go:89] "csi-hostpath-attacher-0" [d1cf1ab9-7f35-4dd9-aa47-9bc40f3875ad] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0610 10:22:52.434802   11511 system_pods.go:89] "csi-hostpath-resizer-0" [d68124c2-14de-475d-92db-90cc6cef8080] Pending
	I0610 10:22:52.434813   11511 system_pods.go:89] "csi-hostpathplugin-9gl88" [9285d121-5350-4eb2-a327-bafaf090e4d9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0610 10:22:52.434818   11511 system_pods.go:89] "etcd-addons-021732" [07cfab01-cdff-4d4b-bf7f-aec5026381cb] Running
	I0610 10:22:52.434823   11511 system_pods.go:89] "kube-apiserver-addons-021732" [f3743640-5f88-4d65-a5d6-178669bc90b9] Running
	I0610 10:22:52.434827   11511 system_pods.go:89] "kube-controller-manager-addons-021732" [d02e8430-937d-4dc5-acf6-d07ee42cdfc3] Running
	I0610 10:22:52.434833   11511 system_pods.go:89] "kube-ingress-dns-minikube" [3e396de4-1f67-49cc-8b15-180ef259e715] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0610 10:22:52.434838   11511 system_pods.go:89] "kube-proxy-7846w" [49d2baed-2c3e-4858-8479-918a31ae3835] Running
	I0610 10:22:52.434843   11511 system_pods.go:89] "kube-scheduler-addons-021732" [20886653-83d8-4491-9a79-a417565db2b5] Running
	I0610 10:22:52.434851   11511 system_pods.go:89] "metrics-server-c59844bb4-5lbmz" [9560fdac-7849-4123-9b3f-b4042539052c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 10:22:52.434858   11511 system_pods.go:89] "nvidia-device-plugin-daemonset-2zf77" [6e61695c-8992-480f-826d-23a9f83617e8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0610 10:22:52.434867   11511 system_pods.go:89] "registry-proxy-lq94h" [4b7b9e8d-e9e9-450e-877e-156e3a37a859] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0610 10:22:52.434873   11511 system_pods.go:89] "registry-xmm5t" [50b19bb8-aabd-4c89-a304-877505b561a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0610 10:22:52.434881   11511 system_pods.go:89] "snapshot-controller-745499f584-8f7kt" [ed854b08-22d6-4798-b11a-1966d9e683c3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0610 10:22:52.434887   11511 system_pods.go:89] "snapshot-controller-745499f584-qbgdf" [928f0775-d90f-49b2-9b80-727b1dfaca99] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0610 10:22:52.434894   11511 system_pods.go:89] "storage-provisioner" [93dd7c04-05d2-42a7-9762-bdb57fa30867] Running
	I0610 10:22:52.434900   11511 system_pods.go:89] "tiller-deploy-6677d64bcd-86c76" [3257a893-b201-4088-be48-fb02698a0350] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0610 10:22:52.434909   11511 system_pods.go:126] duration metric: took 26.892692ms to wait for k8s-apps to be running ...
	I0610 10:22:52.434916   11511 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 10:22:52.434957   11511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:22:52.463106   11511 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0610 10:22:52.463144   11511 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0610 10:22:52.550984   11511 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0610 10:22:52.551014   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0610 10:22:52.659044   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0610 10:22:52.672523   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:52.672580   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:52.814331   11511 system_svc.go:56] duration metric: took 379.407198ms WaitForService to wait for kubelet
	I0610 10:22:52.814360   11511 kubeadm.go:576] duration metric: took 10.564945233s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:22:52.814378   11511 node_conditions.go:102] verifying NodePressure condition ...
	I0610 10:22:52.814540   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.11242968s)
	I0610 10:22:52.814577   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:52.814594   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:52.814874   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:52.814901   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:52.814910   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:52.814918   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:52.815114   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:52.815131   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:52.817674   11511 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 10:22:52.817698   11511 node_conditions.go:123] node cpu capacity is 2
	I0610 10:22:52.817708   11511 node_conditions.go:105] duration metric: took 3.32605ms to run NodePressure ...
	I0610 10:22:52.817719   11511 start.go:240] waiting for startup goroutines ...
	I0610 10:22:52.884396   11511 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0610 10:22:52.884418   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:53.174185   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:53.177498   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:53.389443   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:53.677832   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:53.687884   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:53.898077   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.238991459s)
	I0610 10:22:53.898127   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:53.898142   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:53.898456   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:53.898475   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:53.898530   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:53.898545   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:53.898545   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:53.898556   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:53.898758   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:53.898775   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:53.900610   11511 addons.go:475] Verifying addon gcp-auth=true in "addons-021732"
	I0610 10:22:53.902522   11511 out.go:177] * Verifying gcp-auth addon...
	I0610 10:22:53.904583   11511 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0610 10:22:53.918273   11511 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0610 10:22:53.918292   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:22:54.171654   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:54.171841   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:54.384432   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:54.408148   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:22:54.672485   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:54.672968   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:54.884581   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:54.907790   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:22:55.172557   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:55.172559   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:55.384775   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:55.407765   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:22:55.671746   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:55.672170   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:55.883931   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:55.907597   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:22:56.171561   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:56.171566   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:56.383752   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:56.407766   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:22:56.672150   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:56.672337   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:56.884198   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:56.908542   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:22:57.170838   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:57.171440   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:57.384051   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:57.407831   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:22:57.671791   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:57.672548   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:57.886612   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:57.908656   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:22:58.175898   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:58.190708   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:58.383802   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:58.411405   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:22:58.674273   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:58.675914   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:58.888036   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:58.908673   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:22:59.172833   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:59.173390   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:59.384366   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:59.408696   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:22:59.837161   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:59.841029   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:59.884889   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:59.908456   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:00.172198   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:00.172267   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:00.384633   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:00.409379   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:00.671502   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:00.672203   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:00.883750   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:00.909001   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:01.170972   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:01.170999   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:01.383457   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:01.409374   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:01.672489   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:01.672713   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:01.885372   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:01.908109   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:02.171811   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:02.171959   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:02.384745   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:02.408990   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:02.673193   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:02.675022   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:02.884717   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:02.909609   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:03.172151   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:03.172796   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:03.384432   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:03.408343   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:03.672316   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:03.673023   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:03.885523   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:03.909004   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:04.170985   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:04.173781   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:04.384467   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:04.408445   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:04.672149   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:04.672836   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:05.223028   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:05.223363   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:05.225389   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:05.227587   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:05.385246   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:05.408661   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:05.671038   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:05.671390   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:05.883795   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:05.908409   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:06.172173   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:06.172888   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:06.383705   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:06.408219   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:06.671221   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:06.671398   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:06.883856   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:06.908105   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:07.170377   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:07.170763   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:07.384458   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:07.408130   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:07.998014   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:07.998469   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:08.001596   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:08.002676   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:08.172127   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:08.174672   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:08.383359   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:08.408459   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:08.671454   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:08.671740   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:08.884590   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:08.907648   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:09.171174   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:09.173514   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:09.384631   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:09.407985   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:09.671397   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:09.671501   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:09.883982   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:09.909069   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:10.172729   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:10.173212   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:10.383979   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:10.408303   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:10.670979   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:10.671082   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:10.884147   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:10.908430   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:11.171590   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:11.172078   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:11.385004   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:11.407949   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:11.672754   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:11.673011   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:11.884847   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:11.908261   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:12.171143   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:12.171199   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:12.384583   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:12.408073   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:12.670620   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:12.670976   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:12.884174   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:12.908857   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:13.171436   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:13.171589   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:13.384247   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:13.894705   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:13.896741   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:13.896864   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:13.897162   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:13.913517   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:14.171270   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:14.171434   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:14.384241   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:14.408812   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:14.670031   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:14.670322   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:14.886371   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:14.916040   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:15.170285   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:15.173439   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:15.385153   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:15.408503   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:15.670745   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:15.671496   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:15.885613   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:15.908755   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:16.171608   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:16.172554   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:16.384535   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:16.407752   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:16.671453   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:16.671784   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:16.884422   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:16.908588   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:17.172138   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:17.172345   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:17.384452   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:17.408041   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:17.671187   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:17.672309   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:17.884669   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:17.907587   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:18.171326   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:18.172565   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:18.384800   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:18.408280   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:18.670696   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:18.670936   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:18.884513   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:18.907857   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:19.171373   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:19.172365   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:19.384661   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:19.410273   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:19.671510   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:19.671619   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:19.890663   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:19.907781   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:20.171191   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:20.173470   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:20.388113   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:20.408233   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:20.671363   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:20.671633   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:20.883853   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:20.909039   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:21.170480   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:21.170732   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:21.384885   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:21.407592   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:21.670904   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:21.671200   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:21.889410   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:21.908422   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:22.171912   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:22.172778   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:22.384170   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:22.408341   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:22.672003   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:22.672879   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:22.884276   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:22.908911   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:23.171608   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:23.171641   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:23.383758   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:23.408313   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:23.670989   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:23.671081   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:23.886101   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:23.907558   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:24.170876   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:24.171471   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:24.384464   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:24.408109   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:24.672684   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:24.673254   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:24.886841   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:24.908191   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:25.170027   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:25.170907   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:25.386280   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:25.408658   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:25.670846   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:25.671150   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:25.883727   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:25.908012   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:26.169901   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:26.171043   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:26.384262   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:26.408310   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:26.670374   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:26.670601   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:26.884534   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:26.914248   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:27.170024   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:27.171510   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:27.396067   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:27.408189   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:27.671692   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:27.671829   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:27.884223   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:27.908714   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:28.173702   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:28.175116   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:28.384243   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:28.409720   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:28.671275   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:28.671372   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:28.883880   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:28.908519   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:29.171484   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:29.171989   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:29.384166   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:29.408582   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:29.670221   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:29.671091   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:29.884096   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:29.910267   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:30.170287   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:30.170815   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:30.384201   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:30.408652   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:30.671017   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:30.671094   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:30.884318   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:30.908414   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:31.172996   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:31.174167   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:31.383897   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:31.408524   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:31.671355   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:31.671366   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:31.886649   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:31.915198   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:32.171683   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:32.171947   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:32.384216   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:32.408661   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:32.671008   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:32.672250   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:32.884518   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:32.908827   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:33.172377   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:33.172615   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:33.384900   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:33.408007   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:33.671417   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:33.671843   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:33.884659   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:33.908559   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:34.172084   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:34.172771   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:34.384308   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:34.408261   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:34.673076   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:34.673573   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:34.883640   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:34.907832   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:35.171975   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:35.172116   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:35.385551   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:35.409100   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:35.772417   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:35.772661   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:35.884404   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:35.909009   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:36.172106   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:36.172496   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:36.384238   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:36.408236   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:36.671055   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:36.671343   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:36.883982   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:36.908073   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:37.171393   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:37.172618   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:37.384357   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:37.408329   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:37.671982   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:37.672571   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:37.886299   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:37.908464   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:38.170652   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:38.170958   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:38.384842   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:38.408209   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:38.670114   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:38.671362   11511 kapi.go:107] duration metric: took 49.009497245s to wait for kubernetes.io/minikube-addons=registry ...
	I0610 10:23:38.884147   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:38.908317   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:39.170356   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:39.385541   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:39.408214   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:39.671234   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:39.883626   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:39.908317   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:40.170122   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:40.385156   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:40.407762   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:40.671079   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:40.925091   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:40.927065   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:41.170456   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:41.384814   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:41.407788   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:41.671879   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:42.209673   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:42.211757   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:42.212047   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:42.384288   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:42.408320   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:42.670782   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:42.883866   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:42.908293   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:43.170224   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:43.384967   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:43.408752   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:43.671107   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:43.885075   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:43.908530   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:44.171198   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:44.384761   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:44.408694   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:44.670036   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:44.884652   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:44.908840   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:45.171029   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:45.384024   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:45.408553   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:45.670798   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:45.890236   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:45.908781   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:46.170468   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:46.384005   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:46.408524   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:46.670289   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:46.883910   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:46.908571   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:47.170882   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:47.384031   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:47.408356   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:47.670041   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:47.887057   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:47.915200   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:48.169889   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:48.383997   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:48.408530   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:48.670012   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:48.884722   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:48.908146   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:49.170435   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:49.384228   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:49.409027   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:49.670446   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:49.883850   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:49.908516   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:50.171093   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:50.384298   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:50.408235   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:50.670393   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:50.887356   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:50.909859   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:51.487319   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:51.488110   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:51.489102   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:51.670096   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:51.884902   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:51.908010   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:52.169725   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:52.383715   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:52.407755   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:52.670039   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:52.884494   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:52.908283   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:53.172224   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:53.387937   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:53.411197   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:53.672613   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:53.888105   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:53.910988   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:54.169645   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:54.384731   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:54.409849   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:54.670829   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:54.885134   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:54.911376   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:55.170629   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:55.389245   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:55.407975   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:55.670903   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:55.884651   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:55.908573   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:56.171546   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:56.384694   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:56.408887   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:56.670549   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:56.883824   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:56.908001   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:57.170810   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:57.384755   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:57.408041   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:57.670241   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:57.889299   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:57.908776   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:58.171046   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:58.669636   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:58.670787   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:58.673486   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:58.885765   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:58.909582   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:59.171814   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:59.383849   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:59.408089   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:59.670091   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:59.884376   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:59.908753   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:00.170700   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:24:00.383760   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:00.408341   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:00.670303   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:24:00.885988   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:00.909215   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:01.170203   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:24:01.384917   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:01.408488   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:01.670794   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:24:01.883865   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:01.908066   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:02.169900   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:24:02.384380   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:02.408823   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:02.677775   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:24:02.884267   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:02.908231   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:03.170556   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:24:03.384547   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:03.408070   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:03.670388   11511 kapi.go:107] duration metric: took 1m14.005293415s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0610 10:24:03.886645   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:03.908262   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:04.385168   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:04.408853   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:04.891800   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:04.909627   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:05.383956   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:05.409019   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:05.884077   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:05.908308   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:06.384596   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:06.408649   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:06.885923   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:06.908734   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:07.384570   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:07.407939   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:07.884646   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:07.907860   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:08.383697   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:08.408439   11511 kapi.go:107] duration metric: took 1m14.503855428s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0610 10:24:08.410827   11511 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-021732 cluster.
	I0610 10:24:08.412379   11511 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0610 10:24:08.413802   11511 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0610 10:24:08.884881   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:09.385034   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:09.886134   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:10.385463   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:10.886586   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:11.395962   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:11.884316   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:12.384732   11511 kapi.go:107] duration metric: took 1m20.006008s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0610 10:24:12.386442   11511 out.go:177] * Enabled addons: helm-tiller, metrics-server, nvidia-device-plugin, storage-provisioner, yakd, cloud-spanner, ingress-dns, default-storageclass, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0610 10:24:12.387697   11511 addons.go:510] duration metric: took 1m30.138256587s for enable addons: enabled=[helm-tiller metrics-server nvidia-device-plugin storage-provisioner yakd cloud-spanner ingress-dns default-storageclass inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0610 10:24:12.387737   11511 start.go:245] waiting for cluster config update ...
	I0610 10:24:12.387754   11511 start.go:254] writing updated cluster config ...
	I0610 10:24:12.387998   11511 ssh_runner.go:195] Run: rm -f paused
	I0610 10:24:12.441934   11511 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 10:24:12.443663   11511 out.go:177] * Done! kubectl is now configured to use "addons-021732" cluster and "default" namespace by default
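
Note on the repeated "kapi.go:96" lines above: minikube polls each addon's pods by label selector until they leave Pending, and the "kapi.go:107" lines record how long each selector took. The following is a minimal sketch of that kind of wait loop using client-go, not minikube's actual kapi implementation; the function name waitForPodsRunning, the one-second poll interval, and the exact log wording are illustrative assumptions only.

// waitforpods.go - a minimal sketch (NOT minikube's kapi package) of the
// label-selector polling that yields log lines like
//   "waiting for pod <selector>, current state: Pending"
//   "duration metric: took <d> to wait for <selector> ..."
// Assumptions: function name, 1s poll interval, and log wording are illustrative.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning polls pods matching selector in ns until at least one
// exists and all are Running, or until timeout expires.
func waitForPodsRunning(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	deadline := start.Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					break
				}
			}
			if allRunning {
				fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
				return nil
			}
		} else {
			// No pods yet (or a transient list error): report and keep polling.
			fmt.Printf("waiting for pod %q, current state: Pending: [%v]\n", selector, err)
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, selector)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Example: wait for the ingress-nginx controller pods, as the log above does.
	if err := waitForPodsRunning(context.Background(), client, "ingress-nginx",
		"app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
		panic(err)
	}
}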
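The gcp-auth messages above also note that credential mounting can be skipped per pod via a label with the gcp-auth-skip-secret key. As an illustration only (the label value "true" and the manifest shape are assumptions; the log confirms only the label key), an opted-out pod might be created like this, built with client-go types:

// optout.go - illustrative only: a pod labeled so the gcp-auth webhook skips it.
// Assumption: label value "true"; the log above only confirms the key
// "gcp-auth-skip-secret".
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"}, // opt out of credential mounting
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}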
	
	
	==> CRI-O <==
	Jun 10 10:27:08 addons-021732 crio[678]: time="2024-06-10 10:27:08.492504069Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718015228492474823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584737,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d58c9d2-ae9f-4da4-988a-997190870f40 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:27:08 addons-021732 crio[678]: time="2024-06-10 10:27:08.493077084Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d872154e-21ba-4dc0-a6f1-a06e0d0e0653 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:27:08 addons-021732 crio[678]: time="2024-06-10 10:27:08.493138027Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d872154e-21ba-4dc0-a6f1-a06e0d0e0653 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:27:08 addons-021732 crio[678]: time="2024-06-10 10:27:08.493543336Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc22745643d6853b725260aed1b923e4584d8b14d0021f8f9b42a046e6c006fe,PodSandboxId:7e193706ef9096110b87737cbf61070b4684f0d86473e3a97d0d532143683b26,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1718015221412470525,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-d88fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01572e27-a714-4633-aeea-7e662365ce75,},Annotations:map[string]string{io.kubernetes.container.hash: afd70b2c,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac27835e9bc37e01e901f96ca22c17fd5d02c7d3cc7abe3fb4ed6575a85ef8b,PodSandboxId:eff348790a47f8fccfe3d62e61d16d70653ec33b3f6cf8419aa3b33179bdeda1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1718015081034721462,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8105de8a-be57-47d3-ade8-89321c7029b7,},Annotations:map[string]string{io.kubern
etes.container.hash: 73535256,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5faa2d32e1d9b0154c320a5ace8ef9295cb40018f53b9a1bc29ea84f16ddc2b,PodSandboxId:b5f07ed2ec364ee9893a3550df2d612fed6c86ed923e4a81d732270590f4d9e0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2cfebb9f82f21165fc736638311c2d6b6961fa0226a8164a753cbb589f6b1e43,State:CONTAINER_RUNNING,CreatedAt:1718015059655946675,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7fc69f7444-b726p,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 53f367ca-294c-4305-b2f4-54c5bb185ad9,},Annotations:map[string]string{io.kubernetes.container.hash: 213be43e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a040b0631871c5631fa7c1e5e37c49b6b4f9b576d1bbfe02db04511ebf3231a,PodSandboxId:64b5dd5d40e45f5aa8acbda35a4ed96ef9b876b7b5286e0ad969e9fee9290dd5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1718015047140617852,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-p48fw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c0f8acd7-1aba-434e-9c69-1e2108046b61,},Annotations:map[string]string{io.kubernetes.container.hash: 5cdb680c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2afd05933710d41bfef6803fb1ce14a4dab8e99f9da9efa653bf92cabc5f341,PodSandboxId:1e4c819ea56c652e6b7596fc826522badc103609ff8febd745ad32fc8fa4a464,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1718015025908778936,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-r6b8r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fb5de79d-8042-41be-be73-bc7baa04070e,},Annotations:map[string]string{io.kubernetes.container.hash: 7fdcf30e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e95bd0d26e99f7a5090ed919f987675e29905c7526b19ef6f6659706a74e16c0,PodSandboxId:e1218cb61ab9b10c2d48ea5259709d9f211df07d378eb957ffea925c16b950f1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1718015025755820223,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-w2sdf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d9d7eff5-0907-4068-a6da-10250cd49836,},Annotations:map[string]string{io.kubernetes.container.hash: 6393ee9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d19d323b52af89b5b92bab3b6f19c893aa65fe3177a46cf1454bd513381522b7,PodSandboxId:21444c38a2d27266b67340bde858e6ca2cd849b2b108ea0c7958a7e96447a333,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bf
b18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1718015023423319032,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-p8pv2,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: c0ef4698-bf75-4680-bcfa-95167d27a615,},Annotations:map[string]string{io.kubernetes.container.hash: 282b5fcd,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:949c1eb00eb5e8487e589e8300238d291cfc98df4afb881fd561cf758cc78ef6,PodSandboxId:dae785f5d0a0f34d4612019df92ecf91213ba4898a357a7b65a2b10fc4b41d98,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd9
6de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1718015008088664561,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-68cv5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: fcfe5ad1-9315-4ca6-acfe-1a989c307a55,},Annotations:map[string]string{io.kubernetes.container.hash: f797413,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f53b89d046de317315a4195871d181a2ce396fd05e111ab9650e4efb84b51608,PodSandboxId:b1735eeeb605452e888eb5401196ed44b99504553e895e81479b30aa570a7a78,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-serve
r/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1718015001717463769,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5lbmz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9560fdac-7849-4123-9b3f-b4042539052c,},Annotations:map[string]string{io.kubernetes.container.hash: 27f580fc,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80d4218e2abaf26e52a0b15b1daec5f8d45a248f3c62521a5bd620e6cb39ac51,PodSandboxId:7c251429316808727435b4d9092a1cb11bf9f
9a0bb64787ad073f709a6c94386,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718014968932542999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93dd7c04-05d2-42a7-9762-bdb57fa30867,},Annotations:map[string]string{io.kubernetes.container.hash: 12e2039,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d51f47f6cffd10ed84592ac370dda69205489c5b11d84b22f2bb4811e54fb4,PodSandboxId:a6bb9746ad3545c7b750d9aa7b2d1480c282ab769205f5af0f
084f92aa3f85af,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718014965442219523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rx46l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8198dacc-399a-413f-ba9c-1721544a3b9a,},Annotations:map[string]string{io.kubernetes.container.hash: 612745aa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8854f803d622f6fd0c7bc120aed1cbbe06fc982cea0d1ba840b2ce765d2bbb8a,PodSandboxId:c243608ad14ca90465a6848bb87ab08e6cb01492a5045785e4f1a25a90e05e25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718014963381522056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49d2baed-2c3e-4858-8479-918a31ae3835,},Annotations:map[string]string{io.kubernetes.container.hash: d55409fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termination
GracePeriod: 30,},},&Container{Id:b88fdec6d7662e6f142e2c4782941d1b014d725747ad82975d2a3af2d75fbbac,PodSandboxId:61b23db931e05e46b90fe420f2edfdd903899b6855c49a060161dc9cefe5fb00,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718014943395397339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1774aac21a5451245d407877bf5c9b1f,},Annotations:map[string]string{io.kubernetes.container.hash: 9ae77e1c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f0d847f6ad59
1ffc2d8685f17d719d307927d57b03dac385bc80de1cd722f69,PodSandboxId:930d4bdf4e6e5c97e64cb524f36fbfd135a3ef984f46eafc10705f3540a5d4cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718014943341283465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88f133aeec1950f817d39a425134e254,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1038be0f6076f6cad62c595b27f3c
fd98459c8cb35b6a6e90c6b673fad8e174,PodSandboxId:a40d1bbcc2adfbe1ac233ca4ad30f4a34b6db12b8adb16beda8e5b77f887f4b5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718014943351427312,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 672548f328c46b786476290618e6a09f,},Annotations:map[string]string{io.kubernetes.container.hash: 70606478,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e4c6832ab8029c82de1b8e68e8894ee49e06552c2cb431
ccc85768db866a227,PodSandboxId:850f971a165dae1a6d3908d49d28dbc88bfbdec1bd4c5b831b0a5c02a4c4a360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718014943335262502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e8d90f3cb5861300be12c4a927a655,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d8
72154e-21ba-4dc0-a6f1-a06e0d0e0653 name=/runtime.v1.RuntimeService/ListContainers
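	The Version, ImageFsInfo, and ListContainers entries above are CRI-O answering routine CRI polling over its gRPC socket; the same container set comes back on every poll. As a rough sketch (not part of the captured log, and assuming CRI-O's default socket path on the addons-021732 node), the same three endpoints can be queried by hand with crictl:
	# run inside the node, e.g. after `minikube ssh -p addons-021732`
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version        # RuntimeService/Version
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo    # ImageService/ImageFsInfo
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a -o json  # RuntimeService/ListContainers (unfiltered)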
	Jun 10 10:27:08 addons-021732 crio[678]: time="2024-06-10 10:27:08.528840331Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=313aa4c0-e735-4107-87d2-43d202159c11 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:27:08 addons-021732 crio[678]: time="2024-06-10 10:27:08.528913801Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=313aa4c0-e735-4107-87d2-43d202159c11 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:27:08 addons-021732 crio[678]: time="2024-06-10 10:27:08.530241383Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=95f15bce-4579-4bc4-8f3b-cc9c98d884bf name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:27:08 addons-021732 crio[678]: time="2024-06-10 10:27:08.531420451Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718015228531392621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584737,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95f15bce-4579-4bc4-8f3b-cc9c98d884bf name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:27:08 addons-021732 crio[678]: time="2024-06-10 10:27:08.531997921Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c741b6e9-0996-40fd-b707-ede26998e09b name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:27:08 addons-021732 crio[678]: time="2024-06-10 10:27:08.532110829Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c741b6e9-0996-40fd-b707-ede26998e09b name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:27:08 addons-021732 crio[678]: time="2024-06-10 10:27:08.532535283Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc22745643d6853b725260aed1b923e4584d8b14d0021f8f9b42a046e6c006fe,PodSandboxId:7e193706ef9096110b87737cbf61070b4684f0d86473e3a97d0d532143683b26,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1718015221412470525,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-d88fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01572e27-a714-4633-aeea-7e662365ce75,},Annotations:map[string]string{io.kubernetes.container.hash: afd70b2c,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac27835e9bc37e01e901f96ca22c17fd5d02c7d3cc7abe3fb4ed6575a85ef8b,PodSandboxId:eff348790a47f8fccfe3d62e61d16d70653ec33b3f6cf8419aa3b33179bdeda1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1718015081034721462,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8105de8a-be57-47d3-ade8-89321c7029b7,},Annotations:map[string]string{io.kubern
etes.container.hash: 73535256,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5faa2d32e1d9b0154c320a5ace8ef9295cb40018f53b9a1bc29ea84f16ddc2b,PodSandboxId:b5f07ed2ec364ee9893a3550df2d612fed6c86ed923e4a81d732270590f4d9e0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2cfebb9f82f21165fc736638311c2d6b6961fa0226a8164a753cbb589f6b1e43,State:CONTAINER_RUNNING,CreatedAt:1718015059655946675,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7fc69f7444-b726p,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 53f367ca-294c-4305-b2f4-54c5bb185ad9,},Annotations:map[string]string{io.kubernetes.container.hash: 213be43e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a040b0631871c5631fa7c1e5e37c49b6b4f9b576d1bbfe02db04511ebf3231a,PodSandboxId:64b5dd5d40e45f5aa8acbda35a4ed96ef9b876b7b5286e0ad969e9fee9290dd5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1718015047140617852,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-p48fw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c0f8acd7-1aba-434e-9c69-1e2108046b61,},Annotations:map[string]string{io.kubernetes.container.hash: 5cdb680c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2afd05933710d41bfef6803fb1ce14a4dab8e99f9da9efa653bf92cabc5f341,PodSandboxId:1e4c819ea56c652e6b7596fc826522badc103609ff8febd745ad32fc8fa4a464,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1718015025908778936,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-r6b8r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fb5de79d-8042-41be-be73-bc7baa04070e,},Annotations:map[string]string{io.kubernetes.container.hash: 7fdcf30e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e95bd0d26e99f7a5090ed919f987675e29905c7526b19ef6f6659706a74e16c0,PodSandboxId:e1218cb61ab9b10c2d48ea5259709d9f211df07d378eb957ffea925c16b950f1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1718015025755820223,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-w2sdf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d9d7eff5-0907-4068-a6da-10250cd49836,},Annotations:map[string]string{io.kubernetes.container.hash: 6393ee9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d19d323b52af89b5b92bab3b6f19c893aa65fe3177a46cf1454bd513381522b7,PodSandboxId:21444c38a2d27266b67340bde858e6ca2cd849b2b108ea0c7958a7e96447a333,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bf
b18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1718015023423319032,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-p8pv2,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: c0ef4698-bf75-4680-bcfa-95167d27a615,},Annotations:map[string]string{io.kubernetes.container.hash: 282b5fcd,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:949c1eb00eb5e8487e589e8300238d291cfc98df4afb881fd561cf758cc78ef6,PodSandboxId:dae785f5d0a0f34d4612019df92ecf91213ba4898a357a7b65a2b10fc4b41d98,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd9
6de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1718015008088664561,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-68cv5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: fcfe5ad1-9315-4ca6-acfe-1a989c307a55,},Annotations:map[string]string{io.kubernetes.container.hash: f797413,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f53b89d046de317315a4195871d181a2ce396fd05e111ab9650e4efb84b51608,PodSandboxId:b1735eeeb605452e888eb5401196ed44b99504553e895e81479b30aa570a7a78,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-serve
r/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1718015001717463769,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5lbmz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9560fdac-7849-4123-9b3f-b4042539052c,},Annotations:map[string]string{io.kubernetes.container.hash: 27f580fc,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80d4218e2abaf26e52a0b15b1daec5f8d45a248f3c62521a5bd620e6cb39ac51,PodSandboxId:7c251429316808727435b4d9092a1cb11bf9f
9a0bb64787ad073f709a6c94386,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718014968932542999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93dd7c04-05d2-42a7-9762-bdb57fa30867,},Annotations:map[string]string{io.kubernetes.container.hash: 12e2039,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d51f47f6cffd10ed84592ac370dda69205489c5b11d84b22f2bb4811e54fb4,PodSandboxId:a6bb9746ad3545c7b750d9aa7b2d1480c282ab769205f5af0f
084f92aa3f85af,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718014965442219523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rx46l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8198dacc-399a-413f-ba9c-1721544a3b9a,},Annotations:map[string]string{io.kubernetes.container.hash: 612745aa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8854f803d622f6fd0c7bc120aed1cbbe06fc982cea0d1ba840b2ce765d2bbb8a,PodSandboxId:c243608ad14ca90465a6848bb87ab08e6cb01492a5045785e4f1a25a90e05e25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718014963381522056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49d2baed-2c3e-4858-8479-918a31ae3835,},Annotations:map[string]string{io.kubernetes.container.hash: d55409fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termination
GracePeriod: 30,},},&Container{Id:b88fdec6d7662e6f142e2c4782941d1b014d725747ad82975d2a3af2d75fbbac,PodSandboxId:61b23db931e05e46b90fe420f2edfdd903899b6855c49a060161dc9cefe5fb00,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718014943395397339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1774aac21a5451245d407877bf5c9b1f,},Annotations:map[string]string{io.kubernetes.container.hash: 9ae77e1c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f0d847f6ad59
1ffc2d8685f17d719d307927d57b03dac385bc80de1cd722f69,PodSandboxId:930d4bdf4e6e5c97e64cb524f36fbfd135a3ef984f46eafc10705f3540a5d4cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718014943341283465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88f133aeec1950f817d39a425134e254,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1038be0f6076f6cad62c595b27f3c
fd98459c8cb35b6a6e90c6b673fad8e174,PodSandboxId:a40d1bbcc2adfbe1ac233ca4ad30f4a34b6db12b8adb16beda8e5b77f887f4b5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718014943351427312,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 672548f328c46b786476290618e6a09f,},Annotations:map[string]string{io.kubernetes.container.hash: 70606478,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e4c6832ab8029c82de1b8e68e8894ee49e06552c2cb431
ccc85768db866a227,PodSandboxId:850f971a165dae1a6d3908d49d28dbc88bfbdec1bd4c5b831b0a5c02a4c4a360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718014943335262502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e8d90f3cb5861300be12c4a927a655,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c7
41b6e9-0996-40fd-b707-ede26998e09b name=/runtime.v1.RuntimeService/ListContainers
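	The ListContainers responses above (request ids d872154e… and c741b6e9…) carry a byte-for-byte identical container list; nothing changed between polls. As a hedged convenience (assuming crictl and jq are available on the node), a one-liner can summarize such a dump instead of reading the raw protobuf text:
	# print container name and state per entry; field names follow crictl's JSON output of ListContainersResponse
	sudo crictl ps -a -o json | jq -r '.containers[] | [.metadata.name, .state] | @tsv'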
	Jun 10 10:27:08 addons-021732 crio[678]: time="2024-06-10 10:27:08.565836066Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=25a9d00c-2e9b-4ab7-9c83-5e5e6c8fd964 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:27:08 addons-021732 crio[678]: time="2024-06-10 10:27:08.565909267Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=25a9d00c-2e9b-4ab7-9c83-5e5e6c8fd964 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:27:08 addons-021732 crio[678]: time="2024-06-10 10:27:08.567238658Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bd4c2c74-e100-445e-8c27-923007800a4a name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:27:08 addons-021732 crio[678]: time="2024-06-10 10:27:08.568561496Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718015228568531875,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584737,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bd4c2c74-e100-445e-8c27-923007800a4a name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:27:08 addons-021732 crio[678]: time="2024-06-10 10:27:08.569111494Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8681c12a-503a-4e56-a4f3-24639ed3750d name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:27:08 addons-021732 crio[678]: time="2024-06-10 10:27:08.569312690Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8681c12a-503a-4e56-a4f3-24639ed3750d name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:27:08 addons-021732 crio[678]: time="2024-06-10 10:27:08.569791075Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc22745643d6853b725260aed1b923e4584d8b14d0021f8f9b42a046e6c006fe,PodSandboxId:7e193706ef9096110b87737cbf61070b4684f0d86473e3a97d0d532143683b26,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1718015221412470525,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-d88fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01572e27-a714-4633-aeea-7e662365ce75,},Annotations:map[string]string{io.kubernetes.container.hash: afd70b2c,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac27835e9bc37e01e901f96ca22c17fd5d02c7d3cc7abe3fb4ed6575a85ef8b,PodSandboxId:eff348790a47f8fccfe3d62e61d16d70653ec33b3f6cf8419aa3b33179bdeda1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1718015081034721462,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8105de8a-be57-47d3-ade8-89321c7029b7,},Annotations:map[string]string{io.kubern
etes.container.hash: 73535256,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5faa2d32e1d9b0154c320a5ace8ef9295cb40018f53b9a1bc29ea84f16ddc2b,PodSandboxId:b5f07ed2ec364ee9893a3550df2d612fed6c86ed923e4a81d732270590f4d9e0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2cfebb9f82f21165fc736638311c2d6b6961fa0226a8164a753cbb589f6b1e43,State:CONTAINER_RUNNING,CreatedAt:1718015059655946675,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7fc69f7444-b726p,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 53f367ca-294c-4305-b2f4-54c5bb185ad9,},Annotations:map[string]string{io.kubernetes.container.hash: 213be43e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a040b0631871c5631fa7c1e5e37c49b6b4f9b576d1bbfe02db04511ebf3231a,PodSandboxId:64b5dd5d40e45f5aa8acbda35a4ed96ef9b876b7b5286e0ad969e9fee9290dd5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1718015047140617852,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-p48fw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c0f8acd7-1aba-434e-9c69-1e2108046b61,},Annotations:map[string]string{io.kubernetes.container.hash: 5cdb680c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2afd05933710d41bfef6803fb1ce14a4dab8e99f9da9efa653bf92cabc5f341,PodSandboxId:1e4c819ea56c652e6b7596fc826522badc103609ff8febd745ad32fc8fa4a464,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1718015025908778936,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-r6b8r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fb5de79d-8042-41be-be73-bc7baa04070e,},Annotations:map[string]string{io.kubernetes.container.hash: 7fdcf30e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e95bd0d26e99f7a5090ed919f987675e29905c7526b19ef6f6659706a74e16c0,PodSandboxId:e1218cb61ab9b10c2d48ea5259709d9f211df07d378eb957ffea925c16b950f1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1718015025755820223,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-w2sdf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d9d7eff5-0907-4068-a6da-10250cd49836,},Annotations:map[string]string{io.kubernetes.container.hash: 6393ee9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d19d323b52af89b5b92bab3b6f19c893aa65fe3177a46cf1454bd513381522b7,PodSandboxId:21444c38a2d27266b67340bde858e6ca2cd849b2b108ea0c7958a7e96447a333,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bf
b18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1718015023423319032,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-p8pv2,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: c0ef4698-bf75-4680-bcfa-95167d27a615,},Annotations:map[string]string{io.kubernetes.container.hash: 282b5fcd,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:949c1eb00eb5e8487e589e8300238d291cfc98df4afb881fd561cf758cc78ef6,PodSandboxId:dae785f5d0a0f34d4612019df92ecf91213ba4898a357a7b65a2b10fc4b41d98,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd9
6de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1718015008088664561,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-68cv5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: fcfe5ad1-9315-4ca6-acfe-1a989c307a55,},Annotations:map[string]string{io.kubernetes.container.hash: f797413,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f53b89d046de317315a4195871d181a2ce396fd05e111ab9650e4efb84b51608,PodSandboxId:b1735eeeb605452e888eb5401196ed44b99504553e895e81479b30aa570a7a78,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-serve
r/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1718015001717463769,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5lbmz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9560fdac-7849-4123-9b3f-b4042539052c,},Annotations:map[string]string{io.kubernetes.container.hash: 27f580fc,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80d4218e2abaf26e52a0b15b1daec5f8d45a248f3c62521a5bd620e6cb39ac51,PodSandboxId:7c251429316808727435b4d9092a1cb11bf9f
9a0bb64787ad073f709a6c94386,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718014968932542999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93dd7c04-05d2-42a7-9762-bdb57fa30867,},Annotations:map[string]string{io.kubernetes.container.hash: 12e2039,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d51f47f6cffd10ed84592ac370dda69205489c5b11d84b22f2bb4811e54fb4,PodSandboxId:a6bb9746ad3545c7b750d9aa7b2d1480c282ab769205f5af0f
084f92aa3f85af,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718014965442219523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rx46l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8198dacc-399a-413f-ba9c-1721544a3b9a,},Annotations:map[string]string{io.kubernetes.container.hash: 612745aa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8854f803d622f6fd0c7bc120aed1cbbe06fc982cea0d1ba840b2ce765d2bbb8a,PodSandboxId:c243608ad14ca90465a6848bb87ab08e6cb01492a5045785e4f1a25a90e05e25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718014963381522056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49d2baed-2c3e-4858-8479-918a31ae3835,},Annotations:map[string]string{io.kubernetes.container.hash: d55409fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termination
GracePeriod: 30,},},&Container{Id:b88fdec6d7662e6f142e2c4782941d1b014d725747ad82975d2a3af2d75fbbac,PodSandboxId:61b23db931e05e46b90fe420f2edfdd903899b6855c49a060161dc9cefe5fb00,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718014943395397339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1774aac21a5451245d407877bf5c9b1f,},Annotations:map[string]string{io.kubernetes.container.hash: 9ae77e1c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f0d847f6ad59
1ffc2d8685f17d719d307927d57b03dac385bc80de1cd722f69,PodSandboxId:930d4bdf4e6e5c97e64cb524f36fbfd135a3ef984f46eafc10705f3540a5d4cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718014943341283465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88f133aeec1950f817d39a425134e254,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1038be0f6076f6cad62c595b27f3c
fd98459c8cb35b6a6e90c6b673fad8e174,PodSandboxId:a40d1bbcc2adfbe1ac233ca4ad30f4a34b6db12b8adb16beda8e5b77f887f4b5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718014943351427312,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 672548f328c46b786476290618e6a09f,},Annotations:map[string]string{io.kubernetes.container.hash: 70606478,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e4c6832ab8029c82de1b8e68e8894ee49e06552c2cb431
ccc85768db866a227,PodSandboxId:850f971a165dae1a6d3908d49d28dbc88bfbdec1bd4c5b831b0a5c02a4c4a360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718014943335262502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e8d90f3cb5861300be12c4a927a655,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=86
81c12a-503a-4e56-a4f3-24639ed3750d name=/runtime.v1.RuntimeService/ListContainers
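	In the listings above, the ingress-nginx admission "create" and "patch" containers are the only entries in CONTAINER_EXITED state, which is expected: they belong to the one-shot Jobs that set up the admission webhook certificates and exit when done. A rough sketch for checking their output on the node, assuming crictl resolves the ID prefixes taken from the log above:
	sudo crictl logs e95bd0d26e99                    # exited 'create' container (pod ingress-nginx-admission-create-w2sdf)
	sudo crictl logs c2afd0593371                    # exited 'patch' container (pod ingress-nginx-admission-patch-r6b8r)
	sudo crictl inspect e95bd0d26e99 | grep -i exit  # exit code recorded by CRI-O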
	Jun 10 10:27:08 addons-021732 crio[678]: time="2024-06-10 10:27:08.612560507Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e8020c4c-565a-4ed2-bdfb-cef99c7cebc0 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:27:08 addons-021732 crio[678]: time="2024-06-10 10:27:08.612686309Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e8020c4c-565a-4ed2-bdfb-cef99c7cebc0 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:27:08 addons-021732 crio[678]: time="2024-06-10 10:27:08.614008406Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8db4a44a-dd12-4033-9650-c85cd3d6619d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:27:08 addons-021732 crio[678]: time="2024-06-10 10:27:08.615934034Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718015228615895442,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584737,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8db4a44a-dd12-4033-9650-c85cd3d6619d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:27:08 addons-021732 crio[678]: time="2024-06-10 10:27:08.616725952Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2bd08780-64f1-4211-9592-b0c50d1edc07 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:27:08 addons-021732 crio[678]: time="2024-06-10 10:27:08.616804031Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2bd08780-64f1-4211-9592-b0c50d1edc07 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:27:08 addons-021732 crio[678]: time="2024-06-10 10:27:08.617342596Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc22745643d6853b725260aed1b923e4584d8b14d0021f8f9b42a046e6c006fe,PodSandboxId:7e193706ef9096110b87737cbf61070b4684f0d86473e3a97d0d532143683b26,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1718015221412470525,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-d88fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01572e27-a714-4633-aeea-7e662365ce75,},Annotations:map[string]string{io.kubernetes.container.hash: afd70b2c,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac27835e9bc37e01e901f96ca22c17fd5d02c7d3cc7abe3fb4ed6575a85ef8b,PodSandboxId:eff348790a47f8fccfe3d62e61d16d70653ec33b3f6cf8419aa3b33179bdeda1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1718015081034721462,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8105de8a-be57-47d3-ade8-89321c7029b7,},Annotations:map[string]string{io.kubern
etes.container.hash: 73535256,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5faa2d32e1d9b0154c320a5ace8ef9295cb40018f53b9a1bc29ea84f16ddc2b,PodSandboxId:b5f07ed2ec364ee9893a3550df2d612fed6c86ed923e4a81d732270590f4d9e0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2cfebb9f82f21165fc736638311c2d6b6961fa0226a8164a753cbb589f6b1e43,State:CONTAINER_RUNNING,CreatedAt:1718015059655946675,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7fc69f7444-b726p,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 53f367ca-294c-4305-b2f4-54c5bb185ad9,},Annotations:map[string]string{io.kubernetes.container.hash: 213be43e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a040b0631871c5631fa7c1e5e37c49b6b4f9b576d1bbfe02db04511ebf3231a,PodSandboxId:64b5dd5d40e45f5aa8acbda35a4ed96ef9b876b7b5286e0ad969e9fee9290dd5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1718015047140617852,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-p48fw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c0f8acd7-1aba-434e-9c69-1e2108046b61,},Annotations:map[string]string{io.kubernetes.container.hash: 5cdb680c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2afd05933710d41bfef6803fb1ce14a4dab8e99f9da9efa653bf92cabc5f341,PodSandboxId:1e4c819ea56c652e6b7596fc826522badc103609ff8febd745ad32fc8fa4a464,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1718015025908778936,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-r6b8r,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: fb5de79d-8042-41be-be73-bc7baa04070e,},Annotations:map[string]string{io.kubernetes.container.hash: 7fdcf30e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e95bd0d26e99f7a5090ed919f987675e29905c7526b19ef6f6659706a74e16c0,PodSandboxId:e1218cb61ab9b10c2d48ea5259709d9f211df07d378eb957ffea925c16b950f1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1718015025755820223,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-w2sdf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d9d7eff5-0907-4068-a6da-10250cd49836,},Annotations:map[string]string{io.kubernetes.container.hash: 6393ee9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d19d323b52af89b5b92bab3b6f19c893aa65fe3177a46cf1454bd513381522b7,PodSandboxId:21444c38a2d27266b67340bde858e6ca2cd849b2b108ea0c7958a7e96447a333,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bf
b18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1718015023423319032,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-p8pv2,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: c0ef4698-bf75-4680-bcfa-95167d27a615,},Annotations:map[string]string{io.kubernetes.container.hash: 282b5fcd,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:949c1eb00eb5e8487e589e8300238d291cfc98df4afb881fd561cf758cc78ef6,PodSandboxId:dae785f5d0a0f34d4612019df92ecf91213ba4898a357a7b65a2b10fc4b41d98,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd9
6de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1718015008088664561,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-68cv5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: fcfe5ad1-9315-4ca6-acfe-1a989c307a55,},Annotations:map[string]string{io.kubernetes.container.hash: f797413,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f53b89d046de317315a4195871d181a2ce396fd05e111ab9650e4efb84b51608,PodSandboxId:b1735eeeb605452e888eb5401196ed44b99504553e895e81479b30aa570a7a78,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-serve
r/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1718015001717463769,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5lbmz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9560fdac-7849-4123-9b3f-b4042539052c,},Annotations:map[string]string{io.kubernetes.container.hash: 27f580fc,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80d4218e2abaf26e52a0b15b1daec5f8d45a248f3c62521a5bd620e6cb39ac51,PodSandboxId:7c251429316808727435b4d9092a1cb11bf9f
9a0bb64787ad073f709a6c94386,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718014968932542999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93dd7c04-05d2-42a7-9762-bdb57fa30867,},Annotations:map[string]string{io.kubernetes.container.hash: 12e2039,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d51f47f6cffd10ed84592ac370dda69205489c5b11d84b22f2bb4811e54fb4,PodSandboxId:a6bb9746ad3545c7b750d9aa7b2d1480c282ab769205f5af0f
084f92aa3f85af,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718014965442219523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rx46l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8198dacc-399a-413f-ba9c-1721544a3b9a,},Annotations:map[string]string{io.kubernetes.container.hash: 612745aa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: F
ile,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8854f803d622f6fd0c7bc120aed1cbbe06fc982cea0d1ba840b2ce765d2bbb8a,PodSandboxId:c243608ad14ca90465a6848bb87ab08e6cb01492a5045785e4f1a25a90e05e25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718014963381522056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49d2baed-2c3e-4858-8479-918a31ae3835,},Annotations:map[string]string{io.kubernetes.container.hash: d55409fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termination
GracePeriod: 30,},},&Container{Id:b88fdec6d7662e6f142e2c4782941d1b014d725747ad82975d2a3af2d75fbbac,PodSandboxId:61b23db931e05e46b90fe420f2edfdd903899b6855c49a060161dc9cefe5fb00,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718014943395397339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1774aac21a5451245d407877bf5c9b1f,},Annotations:map[string]string{io.kubernetes.container.hash: 9ae77e1c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f0d847f6ad59
1ffc2d8685f17d719d307927d57b03dac385bc80de1cd722f69,PodSandboxId:930d4bdf4e6e5c97e64cb524f36fbfd135a3ef984f46eafc10705f3540a5d4cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718014943341283465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88f133aeec1950f817d39a425134e254,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1038be0f6076f6cad62c595b27f3c
fd98459c8cb35b6a6e90c6b673fad8e174,PodSandboxId:a40d1bbcc2adfbe1ac233ca4ad30f4a34b6db12b8adb16beda8e5b77f887f4b5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718014943351427312,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 672548f328c46b786476290618e6a09f,},Annotations:map[string]string{io.kubernetes.container.hash: 70606478,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e4c6832ab8029c82de1b8e68e8894ee49e06552c2cb431
ccc85768db866a227,PodSandboxId:850f971a165dae1a6d3908d49d28dbc88bfbdec1bd4c5b831b0a5c02a4c4a360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718014943335262502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e8d90f3cb5861300be12c4a927a655,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b
d08780-64f1-4211-9592-b0c50d1edc07 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dc22745643d68       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      7 seconds ago       Running             hello-world-app           0                   7e193706ef909       hello-world-app-86c47465fc-d88fw
	4ac27835e9bc3       docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa                              2 minutes ago       Running             nginx                     0                   eff348790a47f       nginx
	e5faa2d32e1d9       ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5                        2 minutes ago       Running             headlamp                  0                   b5f07ed2ec364       headlamp-7fc69f7444-b726p
	2a040b0631871       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 3 minutes ago       Running             gcp-auth                  0                   64b5dd5d40e45       gcp-auth-5db96cd9b4-p48fw
	c2afd05933710       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              patch                     0                   1e4c819ea56c6       ingress-nginx-admission-patch-r6b8r
	e95bd0d26e99f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   3 minutes ago       Exited              create                    0                   e1218cb61ab9b       ingress-nginx-admission-create-w2sdf
	d19d323b52af8       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   21444c38a2d27       yakd-dashboard-5ddbf7d777-p8pv2
	949c1eb00eb5e       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   dae785f5d0a0f       local-path-provisioner-8d985888d-68cv5
	f53b89d046de3       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        3 minutes ago       Running             metrics-server            0                   b1735eeeb6054       metrics-server-c59844bb4-5lbmz
	80d4218e2abaf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   7c25142931680       storage-provisioner
	12d51f47f6cff       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             4 minutes ago       Running             coredns                   0                   a6bb9746ad354       coredns-7db6d8ff4d-rx46l
	8854f803d622f       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                                             4 minutes ago       Running             kube-proxy                0                   c243608ad14ca       kube-proxy-7846w
	b88fdec6d7662       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             4 minutes ago       Running             etcd                      0                   61b23db931e05       etcd-addons-021732
	b1038be0f6076       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                                             4 minutes ago       Running             kube-apiserver            0                   a40d1bbcc2adf       kube-apiserver-addons-021732
	3f0d847f6ad59       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                                             4 minutes ago       Running             kube-scheduler            0                   930d4bdf4e6e5       kube-scheduler-addons-021732
	1e4c6832ab802       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                                             4 minutes ago       Running             kube-controller-manager   0                   850f971a165da       kube-controller-manager-addons-021732
	
	
	==> coredns [12d51f47f6cffd10ed84592ac370dda69205489c5b11d84b22f2bb4811e54fb4] <==
	[INFO] 10.244.0.8:40069 - 46801 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000229563s
	[INFO] 10.244.0.8:54549 - 49709 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000100756s
	[INFO] 10.244.0.8:54549 - 21032 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000168558s
	[INFO] 10.244.0.8:46362 - 20939 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094505s
	[INFO] 10.244.0.8:46362 - 35028 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000122818s
	[INFO] 10.244.0.8:53788 - 63092 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000136115s
	[INFO] 10.244.0.8:53788 - 45430 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000108752s
	[INFO] 10.244.0.8:42874 - 34342 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000070314s
	[INFO] 10.244.0.8:42874 - 49973 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000087667s
	[INFO] 10.244.0.8:53948 - 15404 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000062492s
	[INFO] 10.244.0.8:53948 - 49705 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114457s
	[INFO] 10.244.0.8:56589 - 27141 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000088277s
	[INFO] 10.244.0.8:56589 - 18439 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000028869s
	[INFO] 10.244.0.8:46806 - 29265 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000054209s
	[INFO] 10.244.0.8:46806 - 21843 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000108584s
	[INFO] 10.244.0.22:36587 - 53548 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00041039s
	[INFO] 10.244.0.22:49493 - 23515 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000183696s
	[INFO] 10.244.0.22:37858 - 42634 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127859s
	[INFO] 10.244.0.22:38164 - 1990 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000088632s
	[INFO] 10.244.0.22:55093 - 18374 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000153452s
	[INFO] 10.244.0.22:33104 - 21697 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000088884s
	[INFO] 10.244.0.22:33110 - 32901 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000760966s
	[INFO] 10.244.0.22:35001 - 54488 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001532762s
	[INFO] 10.244.0.25:38715 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000368649s
	[INFO] 10.244.0.25:44760 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000271927s
	
	
	==> describe nodes <==
	Name:               addons-021732
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-021732
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=addons-021732
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T10_22_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-021732
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 10:22:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-021732
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 10:27:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 10:25:32 +0000   Mon, 10 Jun 2024 10:22:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 10:25:32 +0000   Mon, 10 Jun 2024 10:22:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 10:25:32 +0000   Mon, 10 Jun 2024 10:22:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 10:25:32 +0000   Mon, 10 Jun 2024 10:22:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.244
	  Hostname:    addons-021732
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 2f47abb3d6e54cdd89e31e075ba7516b
	  System UUID:                2f47abb3-d6e5-4cdd-89e3-1e075ba7516b
	  Boot ID:                    fe81519d-bfc4-45c9-a1b7-f84e0a5c322a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-d88fw          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  gcp-auth                    gcp-auth-5db96cd9b4-p48fw                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  headlamp                    headlamp-7fc69f7444-b726p                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 coredns-7db6d8ff4d-rx46l                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m26s
	  kube-system                 etcd-addons-021732                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m40s
	  kube-system                 kube-apiserver-addons-021732              250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-controller-manager-addons-021732     200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-proxy-7846w                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-scheduler-addons-021732              100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 metrics-server-c59844bb4-5lbmz            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m21s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  local-path-storage          local-path-provisioner-8d985888d-68cv5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-p8pv2           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m24s  kube-proxy       
	  Normal  Starting                 4m40s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m40s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m40s  kubelet          Node addons-021732 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m40s  kubelet          Node addons-021732 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m40s  kubelet          Node addons-021732 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m39s  kubelet          Node addons-021732 status is now: NodeReady
	  Normal  RegisteredNode           4m27s  node-controller  Node addons-021732 event: Registered Node addons-021732 in Controller
	
	
	==> dmesg <==
	[  +5.136093] kauditd_printk_skb: 115 callbacks suppressed
	[  +5.003401] kauditd_printk_skb: 140 callbacks suppressed
	[  +5.234725] kauditd_printk_skb: 56 callbacks suppressed
	[Jun10 10:23] kauditd_printk_skb: 7 callbacks suppressed
	[ +16.787619] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.817677] kauditd_printk_skb: 4 callbacks suppressed
	[ +17.418807] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.099575] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.058301] kauditd_printk_skb: 76 callbacks suppressed
	[Jun10 10:24] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.476449] kauditd_printk_skb: 4 callbacks suppressed
	[  +9.163474] kauditd_printk_skb: 50 callbacks suppressed
	[  +5.006496] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.006368] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.486687] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.027423] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.723339] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.040751] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.683552] kauditd_printk_skb: 23 callbacks suppressed
	[Jun10 10:25] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.043017] kauditd_printk_skb: 2 callbacks suppressed
	[ +24.063671] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.228544] kauditd_printk_skb: 33 callbacks suppressed
	[Jun10 10:26] kauditd_printk_skb: 6 callbacks suppressed
	[Jun10 10:27] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [b88fdec6d7662e6f142e2c4782941d1b014d725747ad82975d2a3af2d75fbbac] <==
	{"level":"warn","ts":"2024-06-10T10:23:51.458386Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.905865ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-10T10:23:51.458418Z","caller":"traceutil/trace.go:171","msg":"trace[111553454] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1021; }","duration":"217.952098ms","start":"2024-06-10T10:23:51.24046Z","end":"2024-06-10T10:23:51.458413Z","steps":["trace[111553454] 'agreement among raft nodes before linearized reading'  (duration: 217.910523ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T10:23:58.640739Z","caller":"traceutil/trace.go:171","msg":"trace[4127607] linearizableReadLoop","detail":"{readStateIndex:1114; appliedIndex:1113; }","duration":"353.373488ms","start":"2024-06-10T10:23:58.287294Z","end":"2024-06-10T10:23:58.640667Z","steps":["trace[4127607] 'read index received'  (duration: 353.228665ms)","trace[4127607] 'applied index is now lower than readState.Index'  (duration: 144.396µs)"],"step_count":2}
	{"level":"warn","ts":"2024-06-10T10:23:58.640972Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"353.697976ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-c59844bb4-5lbmz.17d79d8aeb5d3716\" ","response":"range_response_count:1 size:813"}
	{"level":"info","ts":"2024-06-10T10:23:58.641016Z","caller":"traceutil/trace.go:171","msg":"trace[1238059756] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-c59844bb4-5lbmz.17d79d8aeb5d3716; range_end:; response_count:1; response_revision:1080; }","duration":"353.773314ms","start":"2024-06-10T10:23:58.28723Z","end":"2024-06-10T10:23:58.641004Z","steps":["trace[1238059756] 'agreement among raft nodes before linearized reading'  (duration: 353.627312ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T10:23:58.641039Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-10T10:23:58.287216Z","time spent":"353.81838ms","remote":"127.0.0.1:58184","response type":"/etcdserverpb.KV/Range","request count":0,"request size":78,"response count":1,"response size":836,"request content":"key:\"/registry/events/kube-system/metrics-server-c59844bb4-5lbmz.17d79d8aeb5d3716\" "}
	{"level":"warn","ts":"2024-06-10T10:23:58.641135Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.387541ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85556"}
	{"level":"info","ts":"2024-06-10T10:23:58.64121Z","caller":"traceutil/trace.go:171","msg":"trace[586710287] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1080; }","duration":"284.438087ms","start":"2024-06-10T10:23:58.356716Z","end":"2024-06-10T10:23:58.641154Z","steps":["trace[586710287] 'agreement among raft nodes before linearized reading'  (duration: 284.287748ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T10:23:58.641279Z","caller":"traceutil/trace.go:171","msg":"trace[1788749010] transaction","detail":"{read_only:false; response_revision:1080; number_of_response:1; }","duration":"372.800219ms","start":"2024-06-10T10:23:58.268467Z","end":"2024-06-10T10:23:58.641268Z","steps":["trace[1788749010] 'process raft request'  (duration: 372.094944ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T10:23:58.641337Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-10T10:23:58.268451Z","time spent":"372.847229ms","remote":"127.0.0.1:58282","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1065 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-06-10T10:23:58.641404Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.037911ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/yakd-dashboard/yakd-dashboard-5ddbf7d777-p8pv2\" ","response":"range_response_count:1 size:4502"}
	{"level":"info","ts":"2024-06-10T10:23:58.641426Z","caller":"traceutil/trace.go:171","msg":"trace[1900020226] range","detail":"{range_begin:/registry/pods/yakd-dashboard/yakd-dashboard-5ddbf7d777-p8pv2; range_end:; response_count:1; response_revision:1080; }","duration":"176.081274ms","start":"2024-06-10T10:23:58.465338Z","end":"2024-06-10T10:23:58.641419Z","steps":["trace[1900020226] 'agreement among raft nodes before linearized reading'  (duration: 176.025316ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T10:23:58.641527Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.68431ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-06-10T10:23:58.641541Z","caller":"traceutil/trace.go:171","msg":"trace[1091279413] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1080; }","duration":"258.717387ms","start":"2024-06-10T10:23:58.382819Z","end":"2024-06-10T10:23:58.641537Z","steps":["trace[1091279413] 'agreement among raft nodes before linearized reading'  (duration: 258.668932ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T10:24:13.18708Z","caller":"traceutil/trace.go:171","msg":"trace[260758174] transaction","detail":"{read_only:false; response_revision:1163; number_of_response:1; }","duration":"198.938207ms","start":"2024-06-10T10:24:12.988118Z","end":"2024-06-10T10:24:13.187057Z","steps":["trace[260758174] 'process raft request'  (duration: 198.721823ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T10:24:18.352702Z","caller":"traceutil/trace.go:171","msg":"trace[351633498] linearizableReadLoop","detail":"{readStateIndex:1252; appliedIndex:1251; }","duration":"111.959486ms","start":"2024-06-10T10:24:18.240716Z","end":"2024-06-10T10:24:18.352676Z","steps":["trace[351633498] 'read index received'  (duration: 111.584562ms)","trace[351633498] 'applied index is now lower than readState.Index'  (duration: 374.168µs)"],"step_count":2}
	{"level":"info","ts":"2024-06-10T10:24:18.352922Z","caller":"traceutil/trace.go:171","msg":"trace[641990916] transaction","detail":"{read_only:false; response_revision:1214; number_of_response:1; }","duration":"155.414016ms","start":"2024-06-10T10:24:18.197499Z","end":"2024-06-10T10:24:18.352913Z","steps":["trace[641990916] 'process raft request'  (duration: 154.998039ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T10:24:18.353602Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.828137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-10T10:24:18.353694Z","caller":"traceutil/trace.go:171","msg":"trace[1273084539] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1214; }","duration":"112.994761ms","start":"2024-06-10T10:24:18.24069Z","end":"2024-06-10T10:24:18.353685Z","steps":["trace[1273084539] 'agreement among raft nodes before linearized reading'  (duration: 112.835385ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T10:24:38.715488Z","caller":"traceutil/trace.go:171","msg":"trace[1501931927] linearizableReadLoop","detail":"{readStateIndex:1436; appliedIndex:1435; }","duration":"100.920377ms","start":"2024-06-10T10:24:38.614555Z","end":"2024-06-10T10:24:38.715475Z","steps":["trace[1501931927] 'read index received'  (duration: 100.797798ms)","trace[1501931927] 'applied index is now lower than readState.Index'  (duration: 122.175µs)"],"step_count":2}
	{"level":"warn","ts":"2024-06-10T10:24:38.715679Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.127584ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2024-06-10T10:24:38.715702Z","caller":"traceutil/trace.go:171","msg":"trace[1480646814] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1387; }","duration":"101.195474ms","start":"2024-06-10T10:24:38.6145Z","end":"2024-06-10T10:24:38.715696Z","steps":["trace[1480646814] 'agreement among raft nodes before linearized reading'  (duration: 101.0418ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T10:24:38.715987Z","caller":"traceutil/trace.go:171","msg":"trace[1456422452] transaction","detail":"{read_only:false; response_revision:1387; number_of_response:1; }","duration":"322.914736ms","start":"2024-06-10T10:24:38.393052Z","end":"2024-06-10T10:24:38.715967Z","steps":["trace[1456422452] 'process raft request'  (duration: 322.346736ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T10:24:38.716157Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-10T10:24:38.393037Z","time spent":"322.984558ms","remote":"127.0.0.1:58396","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1359 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-06-10T10:25:19.444156Z","caller":"traceutil/trace.go:171","msg":"trace[1738130015] transaction","detail":"{read_only:false; response_revision:1528; number_of_response:1; }","duration":"115.328245ms","start":"2024-06-10T10:25:19.328808Z","end":"2024-06-10T10:25:19.444136Z","steps":["trace[1738130015] 'process raft request'  (duration: 115.175735ms)"],"step_count":1}
	
	
	==> gcp-auth [2a040b0631871c5631fa7c1e5e37c49b6b4f9b576d1bbfe02db04511ebf3231a] <==
	2024/06/10 10:24:07 GCP Auth Webhook started!
	2024/06/10 10:24:13 Ready to marshal response ...
	2024/06/10 10:24:13 Ready to write response ...
	2024/06/10 10:24:13 Ready to marshal response ...
	2024/06/10 10:24:13 Ready to write response ...
	2024/06/10 10:24:13 Ready to marshal response ...
	2024/06/10 10:24:13 Ready to write response ...
	2024/06/10 10:24:17 Ready to marshal response ...
	2024/06/10 10:24:17 Ready to write response ...
	2024/06/10 10:24:23 Ready to marshal response ...
	2024/06/10 10:24:23 Ready to write response ...
	2024/06/10 10:24:36 Ready to marshal response ...
	2024/06/10 10:24:36 Ready to write response ...
	2024/06/10 10:24:42 Ready to marshal response ...
	2024/06/10 10:24:42 Ready to write response ...
	2024/06/10 10:24:42 Ready to marshal response ...
	2024/06/10 10:24:42 Ready to write response ...
	2024/06/10 10:24:53 Ready to marshal response ...
	2024/06/10 10:24:53 Ready to write response ...
	2024/06/10 10:25:11 Ready to marshal response ...
	2024/06/10 10:25:11 Ready to write response ...
	2024/06/10 10:25:34 Ready to marshal response ...
	2024/06/10 10:25:34 Ready to write response ...
	2024/06/10 10:26:57 Ready to marshal response ...
	2024/06/10 10:26:57 Ready to write response ...
	
	
	==> kernel <==
	 10:27:09 up 5 min,  0 users,  load average: 0.87, 1.34, 0.70
	Linux addons-021732 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b1038be0f6076f6cad62c595b27f3cfd98459c8cb35b6a6e90c6b673fad8e174] <==
	I0610 10:24:29.321657       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 10:24:29.322306       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 10:24:29.322330       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0610 10:24:29.322792       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 10:24:33.334507       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 10:24:33.334559       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0610 10:24:33.334774       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.136.138:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.136.138:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
	I0610 10:24:33.342607       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0610 10:24:36.686816       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0610 10:24:36.875868       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.36.41"}
	I0610 10:25:26.795742       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0610 10:25:52.081892       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0610 10:25:52.082045       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0610 10:25:52.152777       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0610 10:25:52.152896       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0610 10:25:52.197518       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0610 10:25:52.197565       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0610 10:25:52.247122       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0610 10:25:52.247206       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0610 10:25:53.169875       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0610 10:25:53.248121       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0610 10:25:53.262952       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0610 10:26:57.976339       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.133.237"}
	E0610 10:27:00.916550       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [1e4c6832ab8029c82de1b8e68e8894ee49e06552c2cb431ccc85768db866a227] <==
	I0610 10:26:11.867907       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0610 10:26:11.868123       1 shared_informer.go:320] Caches are synced for garbage collector
	W0610 10:26:13.301879       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:26:13.301996       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 10:26:22.607089       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:26:22.607298       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 10:26:26.381144       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:26:26.381225       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 10:26:28.372362       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:26:28.372409       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 10:26:28.902335       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:26:28.902377       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 10:26:54.321146       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:26:54.321339       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 10:26:55.405146       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:26:55.405238       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0610 10:26:57.829489       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="51.436118ms"
	I0610 10:26:57.844227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="14.696055ms"
	I0610 10:26:57.844583       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="48.571µs"
	I0610 10:26:57.846796       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="29.753µs"
	I0610 10:27:00.717371       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0610 10:27:00.723572       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="5.931µs"
	I0610 10:27:00.734648       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0610 10:27:01.775529       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="6.115985ms"
	I0610 10:27:01.775877       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="63.061µs"
	
	
	==> kube-proxy [8854f803d622f6fd0c7bc120aed1cbbe06fc982cea0d1ba840b2ce765d2bbb8a] <==
	I0610 10:22:44.383213       1 server_linux.go:69] "Using iptables proxy"
	I0610 10:22:44.413964       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.244"]
	I0610 10:22:44.480317       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 10:22:44.480364       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 10:22:44.480379       1 server_linux.go:165] "Using iptables Proxier"
	I0610 10:22:44.483590       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 10:22:44.483836       1 server.go:872] "Version info" version="v1.30.1"
	I0610 10:22:44.483863       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 10:22:44.484874       1 config.go:192] "Starting service config controller"
	I0610 10:22:44.484883       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 10:22:44.484909       1 config.go:101] "Starting endpoint slice config controller"
	I0610 10:22:44.484913       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 10:22:44.490872       1 config.go:319] "Starting node config controller"
	I0610 10:22:44.490896       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 10:22:44.585703       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 10:22:44.585785       1 shared_informer.go:320] Caches are synced for service config
	I0610 10:22:44.591142       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3f0d847f6ad591ffc2d8685f17d719d307927d57b03dac385bc80de1cd722f69] <==
	W0610 10:22:25.955458       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 10:22:25.955492       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0610 10:22:26.784539       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0610 10:22:26.784656       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0610 10:22:26.965131       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0610 10:22:26.965193       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0610 10:22:26.978741       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 10:22:26.978792       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0610 10:22:26.989317       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 10:22:26.989360       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 10:22:27.003398       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0610 10:22:27.004045       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0610 10:22:27.062033       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 10:22:27.062080       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 10:22:27.062286       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 10:22:27.062310       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0610 10:22:27.081235       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 10:22:27.081377       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0610 10:22:27.181084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 10:22:27.181202       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 10:22:27.181622       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 10:22:27.181697       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0610 10:22:27.409810       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 10:22:27.409855       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 10:22:29.648999       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 10 10:26:57 addons-021732 kubelet[1270]: I0610 10:26:57.822262    1270 memory_manager.go:354] "RemoveStaleState removing state" podUID="9285d121-5350-4eb2-a327-bafaf090e4d9" containerName="csi-external-health-monitor-controller"
	Jun 10 10:26:57 addons-021732 kubelet[1270]: I0610 10:26:57.822267    1270 memory_manager.go:354] "RemoveStaleState removing state" podUID="9285d121-5350-4eb2-a327-bafaf090e4d9" containerName="node-driver-registrar"
	Jun 10 10:26:57 addons-021732 kubelet[1270]: I0610 10:26:57.853357    1270 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/01572e27-a714-4633-aeea-7e662365ce75-gcp-creds\") pod \"hello-world-app-86c47465fc-d88fw\" (UID: \"01572e27-a714-4633-aeea-7e662365ce75\") " pod="default/hello-world-app-86c47465fc-d88fw"
	Jun 10 10:26:57 addons-021732 kubelet[1270]: I0610 10:26:57.853429    1270 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mds67\" (UniqueName: \"kubernetes.io/projected/01572e27-a714-4633-aeea-7e662365ce75-kube-api-access-mds67\") pod \"hello-world-app-86c47465fc-d88fw\" (UID: \"01572e27-a714-4633-aeea-7e662365ce75\") " pod="default/hello-world-app-86c47465fc-d88fw"
	Jun 10 10:26:58 addons-021732 kubelet[1270]: I0610 10:26:58.960254    1270 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q799p\" (UniqueName: \"kubernetes.io/projected/3e396de4-1f67-49cc-8b15-180ef259e715-kube-api-access-q799p\") pod \"3e396de4-1f67-49cc-8b15-180ef259e715\" (UID: \"3e396de4-1f67-49cc-8b15-180ef259e715\") "
	Jun 10 10:26:58 addons-021732 kubelet[1270]: I0610 10:26:58.963134    1270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e396de4-1f67-49cc-8b15-180ef259e715-kube-api-access-q799p" (OuterVolumeSpecName: "kube-api-access-q799p") pod "3e396de4-1f67-49cc-8b15-180ef259e715" (UID: "3e396de4-1f67-49cc-8b15-180ef259e715"). InnerVolumeSpecName "kube-api-access-q799p". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 10 10:26:59 addons-021732 kubelet[1270]: I0610 10:26:59.060578    1270 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-q799p\" (UniqueName: \"kubernetes.io/projected/3e396de4-1f67-49cc-8b15-180ef259e715-kube-api-access-q799p\") on node \"addons-021732\" DevicePath \"\""
	Jun 10 10:26:59 addons-021732 kubelet[1270]: I0610 10:26:59.740093    1270 scope.go:117] "RemoveContainer" containerID="6319f56deadb276aeb0193adb90b418fb365ebf1cc21c437cae15a4bf62122db"
	Jun 10 10:26:59 addons-021732 kubelet[1270]: I0610 10:26:59.770808    1270 scope.go:117] "RemoveContainer" containerID="6319f56deadb276aeb0193adb90b418fb365ebf1cc21c437cae15a4bf62122db"
	Jun 10 10:26:59 addons-021732 kubelet[1270]: E0610 10:26:59.772437    1270 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6319f56deadb276aeb0193adb90b418fb365ebf1cc21c437cae15a4bf62122db\": container with ID starting with 6319f56deadb276aeb0193adb90b418fb365ebf1cc21c437cae15a4bf62122db not found: ID does not exist" containerID="6319f56deadb276aeb0193adb90b418fb365ebf1cc21c437cae15a4bf62122db"
	Jun 10 10:26:59 addons-021732 kubelet[1270]: I0610 10:26:59.773283    1270 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6319f56deadb276aeb0193adb90b418fb365ebf1cc21c437cae15a4bf62122db"} err="failed to get container status \"6319f56deadb276aeb0193adb90b418fb365ebf1cc21c437cae15a4bf62122db\": rpc error: code = NotFound desc = could not find container \"6319f56deadb276aeb0193adb90b418fb365ebf1cc21c437cae15a4bf62122db\": container with ID starting with 6319f56deadb276aeb0193adb90b418fb365ebf1cc21c437cae15a4bf62122db not found: ID does not exist"
	Jun 10 10:27:00 addons-021732 kubelet[1270]: I0610 10:27:00.723790    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e396de4-1f67-49cc-8b15-180ef259e715" path="/var/lib/kubelet/pods/3e396de4-1f67-49cc-8b15-180ef259e715/volumes"
	Jun 10 10:27:02 addons-021732 kubelet[1270]: I0610 10:27:02.723002    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9d7eff5-0907-4068-a6da-10250cd49836" path="/var/lib/kubelet/pods/d9d7eff5-0907-4068-a6da-10250cd49836/volumes"
	Jun 10 10:27:02 addons-021732 kubelet[1270]: I0610 10:27:02.723913    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb5de79d-8042-41be-be73-bc7baa04070e" path="/var/lib/kubelet/pods/fb5de79d-8042-41be-be73-bc7baa04070e/volumes"
	Jun 10 10:27:03 addons-021732 kubelet[1270]: I0610 10:27:03.993993    1270 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c28d1926-7b01-40d1-91c2-e782bb035349-webhook-cert\") pod \"c28d1926-7b01-40d1-91c2-e782bb035349\" (UID: \"c28d1926-7b01-40d1-91c2-e782bb035349\") "
	Jun 10 10:27:03 addons-021732 kubelet[1270]: I0610 10:27:03.994059    1270 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtrk8\" (UniqueName: \"kubernetes.io/projected/c28d1926-7b01-40d1-91c2-e782bb035349-kube-api-access-xtrk8\") pod \"c28d1926-7b01-40d1-91c2-e782bb035349\" (UID: \"c28d1926-7b01-40d1-91c2-e782bb035349\") "
	Jun 10 10:27:04 addons-021732 kubelet[1270]: I0610 10:27:04.000283    1270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c28d1926-7b01-40d1-91c2-e782bb035349-kube-api-access-xtrk8" (OuterVolumeSpecName: "kube-api-access-xtrk8") pod "c28d1926-7b01-40d1-91c2-e782bb035349" (UID: "c28d1926-7b01-40d1-91c2-e782bb035349"). InnerVolumeSpecName "kube-api-access-xtrk8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 10 10:27:04 addons-021732 kubelet[1270]: I0610 10:27:04.000672    1270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c28d1926-7b01-40d1-91c2-e782bb035349-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "c28d1926-7b01-40d1-91c2-e782bb035349" (UID: "c28d1926-7b01-40d1-91c2-e782bb035349"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 10 10:27:04 addons-021732 kubelet[1270]: I0610 10:27:04.095075    1270 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c28d1926-7b01-40d1-91c2-e782bb035349-webhook-cert\") on node \"addons-021732\" DevicePath \"\""
	Jun 10 10:27:04 addons-021732 kubelet[1270]: I0610 10:27:04.095108    1270 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-xtrk8\" (UniqueName: \"kubernetes.io/projected/c28d1926-7b01-40d1-91c2-e782bb035349-kube-api-access-xtrk8\") on node \"addons-021732\" DevicePath \"\""
	Jun 10 10:27:04 addons-021732 kubelet[1270]: I0610 10:27:04.722357    1270 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c28d1926-7b01-40d1-91c2-e782bb035349" path="/var/lib/kubelet/pods/c28d1926-7b01-40d1-91c2-e782bb035349/volumes"
	Jun 10 10:27:04 addons-021732 kubelet[1270]: I0610 10:27:04.772038    1270 scope.go:117] "RemoveContainer" containerID="2ff4e2bfacd0a03d95095ad64b8a4664dc28f59e7e882273549c18faf92fc8ab"
	Jun 10 10:27:04 addons-021732 kubelet[1270]: I0610 10:27:04.785541    1270 scope.go:117] "RemoveContainer" containerID="2ff4e2bfacd0a03d95095ad64b8a4664dc28f59e7e882273549c18faf92fc8ab"
	Jun 10 10:27:04 addons-021732 kubelet[1270]: E0610 10:27:04.786091    1270 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ff4e2bfacd0a03d95095ad64b8a4664dc28f59e7e882273549c18faf92fc8ab\": container with ID starting with 2ff4e2bfacd0a03d95095ad64b8a4664dc28f59e7e882273549c18faf92fc8ab not found: ID does not exist" containerID="2ff4e2bfacd0a03d95095ad64b8a4664dc28f59e7e882273549c18faf92fc8ab"
	Jun 10 10:27:04 addons-021732 kubelet[1270]: I0610 10:27:04.786277    1270 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff4e2bfacd0a03d95095ad64b8a4664dc28f59e7e882273549c18faf92fc8ab"} err="failed to get container status \"2ff4e2bfacd0a03d95095ad64b8a4664dc28f59e7e882273549c18faf92fc8ab\": rpc error: code = NotFound desc = could not find container \"2ff4e2bfacd0a03d95095ad64b8a4664dc28f59e7e882273549c18faf92fc8ab\": container with ID starting with 2ff4e2bfacd0a03d95095ad64b8a4664dc28f59e7e882273549c18faf92fc8ab not found: ID does not exist"
	
	
	==> storage-provisioner [80d4218e2abaf26e52a0b15b1daec5f8d45a248f3c62521a5bd620e6cb39ac51] <==
	I0610 10:22:50.581254       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 10:22:50.661037       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 10:22:50.661142       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 10:22:50.677924       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 10:22:50.678303       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-021732_e4916fdc-bb70-4fc6-a576-6defff5c5bc4!
	I0610 10:22:50.684718       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a6232844-0476-41f5-b9df-66a2659aee82", APIVersion:"v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-021732_e4916fdc-bb70-4fc6-a576-6defff5c5bc4 became leader
	I0610 10:22:50.779100       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-021732_e4916fdc-bb70-4fc6-a576-6defff5c5bc4!
	

                                                
                                                
-- /stdout --
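note: the kube-scheduler log near the top of the dump above shows repeated "forbidden" errors for system:kube-scheduler (poddisruptionbudgets, replicasets, the extension-apiserver-authentication configmap) before the client-ca caches synced. The sketch below is not part of the test suite; it is a minimal, hypothetical Go helper that shells out to `kubectl auth can-i` to confirm whether those permissions are present once the cluster has settled. Only the context name addons-021732 is taken from the report; the rest is an assumption.

// rbaccheck.go: hypothetical helper, assumes kubectl is on PATH and the
// addons-021732 context exists. Checks the verbs the scheduler was denied above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// canI runs `kubectl auth can-i` while impersonating system:kube-scheduler and
// reports whether the answer was "yes". A "no" answer makes kubectl exit 1,
// which is deliberately folded into a false result here.
func canI(kubeContext, verb, resource string, extra ...string) bool {
	args := append([]string{"--context", kubeContext, "auth", "can-i", verb, resource,
		"--as", "system:kube-scheduler"}, extra...)
	out, _ := exec.Command("kubectl", args...).Output()
	return strings.TrimSpace(string(out)) == "yes"
}

func main() {
	const kubeContext = "addons-021732" // taken from the report above
	fmt.Println("list poddisruptionbudgets:", canI(kubeContext, "list", "poddisruptionbudgets.policy"))
	fmt.Println("list replicasets:", canI(kubeContext, "list", "replicasets.apps"))
	fmt.Println("list configmaps (kube-system):", canI(kubeContext, "list", "configmaps", "-n", "kube-system"))
}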
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-021732 -n addons-021732
helpers_test.go:261: (dbg) Run:  kubectl --context addons-021732 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (153.31s)
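note: the Audit table later in this report records the in-VM probe that never succeeded for this test (ssh addons-021732 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'). The following is a rough, hypothetical Go sketch of retrying that probe from the host; the binary path and profile name come from the report, while the retry interval and deadline are assumptions.

// ingressprobe.go: hypothetical sketch, assumes out/minikube-linux-amd64 exists
// and the addons-021732 profile is running.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const minikube = "out/minikube-linux-amd64" // path used throughout this report
	const profile = "addons-021732"
	// Ask curl inside the VM for just the HTTP status code of the Host-routed request.
	probe := "curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1/ -H 'Host: nginx.example.com'"

	deadline := time.Now().Add(2 * time.Minute) // assumed retry budget
	for time.Now().Before(deadline) {
		out, err := exec.Command(minikube, "-p", profile, "ssh", probe).CombinedOutput()
		code := strings.TrimSpace(string(out))
		if err == nil && code == "200" {
			fmt.Println("ingress answered with 200")
			return
		}
		fmt.Printf("ingress not ready (err=%v, output=%q); retrying in 10s\n", err, code)
		time.Sleep(10 * time.Second)
	}
	fmt.Println("gave up waiting for the ingress to answer")
}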

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (358.94s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.499406ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-5lbmz" [9560fdac-7849-4123-9b3f-b4042539052c] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
helpers_test.go:344: "metrics-server-c59844bb4-5lbmz" [9560fdac-7849-4123-9b3f-b4042539052c] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.008138504s
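note: the readiness wait above (helpers_test.go:344 polling pods matching k8s-app=metrics-server) can be approximated outside the test harness with `kubectl wait`. A minimal sketch, assuming kubectl is on PATH and the addons-021732 context exists:

// waitpods.go: hypothetical sketch of the label-selector readiness wait.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "addons-021732",
		"-n", "kube-system", "wait", "pod",
		"-l", "k8s-app=metrics-server",
		"--for=condition=Ready", "--timeout=6m")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "metrics-server pods did not become Ready:", err)
		os.Exit(1)
	}
	fmt.Println("metrics-server pods are Ready")
}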
addons_test.go:417: (dbg) Run:  kubectl --context addons-021732 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-021732 top pods -n kube-system: exit status 1 (52.173421ms)

                                                
                                                
** stderr ** 
	error: Metrics API not available

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-021732 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-021732 top pods -n kube-system: exit status 1 (72.003624ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-021732, age: 2m7.624503026s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-021732 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-021732 top pods -n kube-system: exit status 1 (69.469904ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/etcd-addons-021732, age: 2m13.367379149s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-021732 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-021732 top pods -n kube-system: exit status 1 (72.170817ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rx46l, age: 2m9.097396498s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-021732 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-021732 top pods -n kube-system: exit status 1 (64.627547ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rx46l, age: 2m19.614237877s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-021732 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-021732 top pods -n kube-system: exit status 1 (66.935619ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rx46l, age: 2m35.569918245s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-021732 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-021732 top pods -n kube-system: exit status 1 (66.938294ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rx46l, age: 3m9.65424915s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-021732 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-021732 top pods -n kube-system: exit status 1 (66.887896ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rx46l, age: 3m51.289740649s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-021732 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-021732 top pods -n kube-system: exit status 1 (66.267616ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rx46l, age: 4m17.934776532s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-021732 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-021732 top pods -n kube-system: exit status 1 (65.885735ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rx46l, age: 5m27.367570031s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-021732 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-021732 top pods -n kube-system: exit status 1 (62.691293ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rx46l, age: 6m10.984483426s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-021732 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-021732 top pods -n kube-system: exit status 1 (64.328396ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-rx46l, age: 7m40.894067628s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
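note: the failure above is addons_test.go:417 repeatedly getting "Metrics API not available" and then "Metrics not available for pod ..." from kubectl top over roughly six minutes. Below is a hedged Go sketch of the same poll, with an extra look at the metrics APIService (commonly registered as v1beta1.metrics.k8s.io by the metrics-server addon); kubectl on PATH and the addons-021732 context are assumptions. If the APIService never reports Available, the metrics-server pod logs are usually the next place to look.

// metricspoll.go: hypothetical sketch mirroring the retry loop in the transcript above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// run invokes kubectl against the addons-021732 context and returns combined output.
func run(args ...string) (string, error) {
	full := append([]string{"--context", "addons-021732"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	return string(out), err
}

func main() {
	deadline := time.Now().Add(7 * time.Minute) // roughly the span retried above
	for time.Now().Before(deadline) {
		out, err := run("top", "pods", "-n", "kube-system")
		if err == nil {
			fmt.Print("pod metrics available:\n" + out)
			return
		}
		fmt.Println("kubectl top still failing:", err)

		// The Available condition of the metrics APIService usually explains why.
		status, _ := run("get", "apiservice", "v1beta1.metrics.k8s.io", "-o",
			`jsonpath={.status.conditions[?(@.type=="Available")].status}`)
		fmt.Println("metrics APIService Available:", status)

		time.Sleep(15 * time.Second)
	}
	fmt.Println("gave up: metrics never became available")
}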
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-021732 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-021732 -n addons-021732
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-021732 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-021732 logs -n 25: (1.340521482s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 10 Jun 24 10:21 UTC | 10 Jun 24 10:21 UTC |
	| delete  | -p download-only-938190                                                                     | download-only-938190 | jenkins | v1.33.1 | 10 Jun 24 10:21 UTC | 10 Jun 24 10:21 UTC |
	| delete  | -p download-only-996636                                                                     | download-only-996636 | jenkins | v1.33.1 | 10 Jun 24 10:21 UTC | 10 Jun 24 10:21 UTC |
	| delete  | -p download-only-938190                                                                     | download-only-938190 | jenkins | v1.33.1 | 10 Jun 24 10:21 UTC | 10 Jun 24 10:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-775609 | jenkins | v1.33.1 | 10 Jun 24 10:21 UTC |                     |
	|         | binary-mirror-775609                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:34103                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-775609                                                                     | binary-mirror-775609 | jenkins | v1.33.1 | 10 Jun 24 10:21 UTC | 10 Jun 24 10:21 UTC |
	| addons  | disable dashboard -p                                                                        | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:21 UTC |                     |
	|         | addons-021732                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:21 UTC |                     |
	|         | addons-021732                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-021732 --wait=true                                                                | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:21 UTC | 10 Jun 24 10:24 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:24 UTC | 10 Jun 24 10:24 UTC |
	|         | -p addons-021732                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:24 UTC | 10 Jun 24 10:24 UTC |
	|         | addons-021732                                                                               |                      |         |         |                     |                     |
	| addons  | addons-021732 addons disable                                                                | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:24 UTC | 10 Jun 24 10:24 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-021732 ip                                                                            | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:24 UTC | 10 Jun 24 10:24 UTC |
	| addons  | addons-021732 addons disable                                                                | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:24 UTC | 10 Jun 24 10:24 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:24 UTC | 10 Jun 24 10:24 UTC |
	|         | -p addons-021732                                                                            |                      |         |         |                     |                     |
	| ssh     | addons-021732 ssh curl -s                                                                   | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:24 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-021732 ssh cat                                                                       | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:24 UTC | 10 Jun 24 10:24 UTC |
	|         | /opt/local-path-provisioner/pvc-be3afae5-1392-4466-a1db-28b1c658ba01_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-021732 addons disable                                                                | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:24 UTC | 10 Jun 24 10:24 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:24 UTC | 10 Jun 24 10:24 UTC |
	|         | addons-021732                                                                               |                      |         |         |                     |                     |
	| addons  | addons-021732 addons                                                                        | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:25 UTC | 10 Jun 24 10:25 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-021732 addons                                                                        | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:25 UTC | 10 Jun 24 10:25 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-021732 ip                                                                            | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:26 UTC | 10 Jun 24 10:26 UTC |
	| addons  | addons-021732 addons disable                                                                | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:26 UTC | 10 Jun 24 10:26 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-021732 addons disable                                                                | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:26 UTC | 10 Jun 24 10:27 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-021732 addons                                                                        | addons-021732        | jenkins | v1.33.1 | 10 Jun 24 10:30 UTC | 10 Jun 24 10:30 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 10:21:49
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 10:21:49.316066   11511 out.go:291] Setting OutFile to fd 1 ...
	I0610 10:21:49.316303   11511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:21:49.316312   11511 out.go:304] Setting ErrFile to fd 2...
	I0610 10:21:49.316316   11511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:21:49.316522   11511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 10:21:49.317167   11511 out.go:298] Setting JSON to false
	I0610 10:21:49.317958   11511 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":250,"bootTime":1718014659,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 10:21:49.318018   11511 start.go:139] virtualization: kvm guest
	I0610 10:21:49.320049   11511 out.go:177] * [addons-021732] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 10:21:49.321469   11511 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 10:21:49.322696   11511 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:21:49.321485   11511 notify.go:220] Checking for updates...
	I0610 10:21:49.325037   11511 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 10:21:49.326175   11511 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:21:49.327541   11511 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 10:21:49.328744   11511 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:21:49.330331   11511 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 10:21:49.361160   11511 out.go:177] * Using the kvm2 driver based on user configuration
	I0610 10:21:49.362267   11511 start.go:297] selected driver: kvm2
	I0610 10:21:49.362283   11511 start.go:901] validating driver "kvm2" against <nil>
	I0610 10:21:49.362297   11511 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:21:49.363073   11511 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:21:49.363176   11511 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19046-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0610 10:21:49.377551   11511 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0610 10:21:49.377596   11511 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 10:21:49.377798   11511 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:21:49.377848   11511 cni.go:84] Creating CNI manager for ""
	I0610 10:21:49.377860   11511 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 10:21:49.377867   11511 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 10:21:49.377911   11511 start.go:340] cluster config:
	{Name:addons-021732 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-021732 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:21:49.378011   11511 iso.go:125] acquiring lock: {Name:mk85871dbcb370cef71a4aa2ff7ef43503908c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:21:49.383533   11511 out.go:177] * Starting "addons-021732" primary control-plane node in "addons-021732" cluster
	I0610 10:21:49.384833   11511 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 10:21:49.384867   11511 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0610 10:21:49.384874   11511 cache.go:56] Caching tarball of preloaded images
	I0610 10:21:49.384978   11511 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 10:21:49.384991   11511 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0610 10:21:49.385307   11511 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/config.json ...
	I0610 10:21:49.385329   11511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/config.json: {Name:mke7f6b1ae5b13865ef37639a6a871ad9f6270b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:21:49.385472   11511 start.go:360] acquireMachinesLock for addons-021732: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:21:49.385529   11511 start.go:364] duration metric: took 40.705µs to acquireMachinesLock for "addons-021732"
	I0610 10:21:49.385553   11511 start.go:93] Provisioning new machine with config: &{Name:addons-021732 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-021732 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 10:21:49.385611   11511 start.go:125] createHost starting for "" (driver="kvm2")
	I0610 10:21:49.387155   11511 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0610 10:21:49.387272   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:21:49.387305   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:21:49.402043   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38969
	I0610 10:21:49.402432   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:21:49.403003   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:21:49.403027   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:21:49.403329   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:21:49.403523   11511 main.go:141] libmachine: (addons-021732) Calling .GetMachineName
	I0610 10:21:49.403640   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:21:49.403854   11511 start.go:159] libmachine.API.Create for "addons-021732" (driver="kvm2")
	I0610 10:21:49.403875   11511 client.go:168] LocalClient.Create starting
	I0610 10:21:49.403908   11511 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem
	I0610 10:21:49.581205   11511 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem
	I0610 10:21:49.632176   11511 main.go:141] libmachine: Running pre-create checks...
	I0610 10:21:49.632201   11511 main.go:141] libmachine: (addons-021732) Calling .PreCreateCheck
	I0610 10:21:49.632780   11511 main.go:141] libmachine: (addons-021732) Calling .GetConfigRaw
	I0610 10:21:49.633269   11511 main.go:141] libmachine: Creating machine...
	I0610 10:21:49.633283   11511 main.go:141] libmachine: (addons-021732) Calling .Create
	I0610 10:21:49.633451   11511 main.go:141] libmachine: (addons-021732) Creating KVM machine...
	I0610 10:21:49.634744   11511 main.go:141] libmachine: (addons-021732) DBG | found existing default KVM network
	I0610 10:21:49.635659   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:49.635496   11534 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000014730}
	I0610 10:21:49.635735   11511 main.go:141] libmachine: (addons-021732) DBG | created network xml: 
	I0610 10:21:49.635759   11511 main.go:141] libmachine: (addons-021732) DBG | <network>
	I0610 10:21:49.635771   11511 main.go:141] libmachine: (addons-021732) DBG |   <name>mk-addons-021732</name>
	I0610 10:21:49.635790   11511 main.go:141] libmachine: (addons-021732) DBG |   <dns enable='no'/>
	I0610 10:21:49.635802   11511 main.go:141] libmachine: (addons-021732) DBG |   
	I0610 10:21:49.635817   11511 main.go:141] libmachine: (addons-021732) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0610 10:21:49.635832   11511 main.go:141] libmachine: (addons-021732) DBG |     <dhcp>
	I0610 10:21:49.635849   11511 main.go:141] libmachine: (addons-021732) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0610 10:21:49.635862   11511 main.go:141] libmachine: (addons-021732) DBG |     </dhcp>
	I0610 10:21:49.635874   11511 main.go:141] libmachine: (addons-021732) DBG |   </ip>
	I0610 10:21:49.635886   11511 main.go:141] libmachine: (addons-021732) DBG |   
	I0610 10:21:49.635896   11511 main.go:141] libmachine: (addons-021732) DBG | </network>
	I0610 10:21:49.635907   11511 main.go:141] libmachine: (addons-021732) DBG | 
	I0610 10:21:49.641171   11511 main.go:141] libmachine: (addons-021732) DBG | trying to create private KVM network mk-addons-021732 192.168.39.0/24...
	I0610 10:21:49.706056   11511 main.go:141] libmachine: (addons-021732) DBG | private KVM network mk-addons-021732 192.168.39.0/24 created
	I0610 10:21:49.706112   11511 main.go:141] libmachine: (addons-021732) Setting up store path in /home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732 ...
	I0610 10:21:49.706129   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:49.706030   11534 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:21:49.706150   11511 main.go:141] libmachine: (addons-021732) Building disk image from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0610 10:21:49.706246   11511 main.go:141] libmachine: (addons-021732) Downloading /home/jenkins/minikube-integration/19046-3880/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 10:21:49.956545   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:49.956439   11534 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa...
	I0610 10:21:50.098561   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:50.098413   11534 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/addons-021732.rawdisk...
	I0610 10:21:50.098591   11511 main.go:141] libmachine: (addons-021732) DBG | Writing magic tar header
	I0610 10:21:50.098601   11511 main.go:141] libmachine: (addons-021732) DBG | Writing SSH key tar header
	I0610 10:21:50.098608   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:50.098524   11534 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732 ...
	I0610 10:21:50.098619   11511 main.go:141] libmachine: (addons-021732) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732
	I0610 10:21:50.098651   11511 main.go:141] libmachine: (addons-021732) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines
	I0610 10:21:50.098665   11511 main.go:141] libmachine: (addons-021732) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732 (perms=drwx------)
	I0610 10:21:50.098675   11511 main.go:141] libmachine: (addons-021732) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:21:50.098685   11511 main.go:141] libmachine: (addons-021732) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880
	I0610 10:21:50.098691   11511 main.go:141] libmachine: (addons-021732) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0610 10:21:50.098698   11511 main.go:141] libmachine: (addons-021732) DBG | Checking permissions on dir: /home/jenkins
	I0610 10:21:50.098703   11511 main.go:141] libmachine: (addons-021732) DBG | Checking permissions on dir: /home
	I0610 10:21:50.098709   11511 main.go:141] libmachine: (addons-021732) DBG | Skipping /home - not owner
	I0610 10:21:50.098721   11511 main.go:141] libmachine: (addons-021732) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines (perms=drwxr-xr-x)
	I0610 10:21:50.098734   11511 main.go:141] libmachine: (addons-021732) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube (perms=drwxr-xr-x)
	I0610 10:21:50.098773   11511 main.go:141] libmachine: (addons-021732) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880 (perms=drwxrwxr-x)
	I0610 10:21:50.098802   11511 main.go:141] libmachine: (addons-021732) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0610 10:21:50.098812   11511 main.go:141] libmachine: (addons-021732) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0610 10:21:50.098817   11511 main.go:141] libmachine: (addons-021732) Creating domain...
	I0610 10:21:50.099756   11511 main.go:141] libmachine: (addons-021732) define libvirt domain using xml: 
	I0610 10:21:50.099781   11511 main.go:141] libmachine: (addons-021732) <domain type='kvm'>
	I0610 10:21:50.099792   11511 main.go:141] libmachine: (addons-021732)   <name>addons-021732</name>
	I0610 10:21:50.099800   11511 main.go:141] libmachine: (addons-021732)   <memory unit='MiB'>4000</memory>
	I0610 10:21:50.099809   11511 main.go:141] libmachine: (addons-021732)   <vcpu>2</vcpu>
	I0610 10:21:50.099816   11511 main.go:141] libmachine: (addons-021732)   <features>
	I0610 10:21:50.099826   11511 main.go:141] libmachine: (addons-021732)     <acpi/>
	I0610 10:21:50.099836   11511 main.go:141] libmachine: (addons-021732)     <apic/>
	I0610 10:21:50.099848   11511 main.go:141] libmachine: (addons-021732)     <pae/>
	I0610 10:21:50.099863   11511 main.go:141] libmachine: (addons-021732)     
	I0610 10:21:50.099874   11511 main.go:141] libmachine: (addons-021732)   </features>
	I0610 10:21:50.099882   11511 main.go:141] libmachine: (addons-021732)   <cpu mode='host-passthrough'>
	I0610 10:21:50.099891   11511 main.go:141] libmachine: (addons-021732)   
	I0610 10:21:50.099931   11511 main.go:141] libmachine: (addons-021732)   </cpu>
	I0610 10:21:50.099942   11511 main.go:141] libmachine: (addons-021732)   <os>
	I0610 10:21:50.099955   11511 main.go:141] libmachine: (addons-021732)     <type>hvm</type>
	I0610 10:21:50.099966   11511 main.go:141] libmachine: (addons-021732)     <boot dev='cdrom'/>
	I0610 10:21:50.100011   11511 main.go:141] libmachine: (addons-021732)     <boot dev='hd'/>
	I0610 10:21:50.100042   11511 main.go:141] libmachine: (addons-021732)     <bootmenu enable='no'/>
	I0610 10:21:50.100050   11511 main.go:141] libmachine: (addons-021732)   </os>
	I0610 10:21:50.100057   11511 main.go:141] libmachine: (addons-021732)   <devices>
	I0610 10:21:50.100063   11511 main.go:141] libmachine: (addons-021732)     <disk type='file' device='cdrom'>
	I0610 10:21:50.100075   11511 main.go:141] libmachine: (addons-021732)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/boot2docker.iso'/>
	I0610 10:21:50.100082   11511 main.go:141] libmachine: (addons-021732)       <target dev='hdc' bus='scsi'/>
	I0610 10:21:50.100089   11511 main.go:141] libmachine: (addons-021732)       <readonly/>
	I0610 10:21:50.100094   11511 main.go:141] libmachine: (addons-021732)     </disk>
	I0610 10:21:50.100101   11511 main.go:141] libmachine: (addons-021732)     <disk type='file' device='disk'>
	I0610 10:21:50.100118   11511 main.go:141] libmachine: (addons-021732)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0610 10:21:50.100139   11511 main.go:141] libmachine: (addons-021732)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/addons-021732.rawdisk'/>
	I0610 10:21:50.100150   11511 main.go:141] libmachine: (addons-021732)       <target dev='hda' bus='virtio'/>
	I0610 10:21:50.100161   11511 main.go:141] libmachine: (addons-021732)     </disk>
	I0610 10:21:50.100169   11511 main.go:141] libmachine: (addons-021732)     <interface type='network'>
	I0610 10:21:50.100177   11511 main.go:141] libmachine: (addons-021732)       <source network='mk-addons-021732'/>
	I0610 10:21:50.100183   11511 main.go:141] libmachine: (addons-021732)       <model type='virtio'/>
	I0610 10:21:50.100189   11511 main.go:141] libmachine: (addons-021732)     </interface>
	I0610 10:21:50.100195   11511 main.go:141] libmachine: (addons-021732)     <interface type='network'>
	I0610 10:21:50.100202   11511 main.go:141] libmachine: (addons-021732)       <source network='default'/>
	I0610 10:21:50.100207   11511 main.go:141] libmachine: (addons-021732)       <model type='virtio'/>
	I0610 10:21:50.100212   11511 main.go:141] libmachine: (addons-021732)     </interface>
	I0610 10:21:50.100217   11511 main.go:141] libmachine: (addons-021732)     <serial type='pty'>
	I0610 10:21:50.100227   11511 main.go:141] libmachine: (addons-021732)       <target port='0'/>
	I0610 10:21:50.100236   11511 main.go:141] libmachine: (addons-021732)     </serial>
	I0610 10:21:50.100246   11511 main.go:141] libmachine: (addons-021732)     <console type='pty'>
	I0610 10:21:50.100252   11511 main.go:141] libmachine: (addons-021732)       <target type='serial' port='0'/>
	I0610 10:21:50.100262   11511 main.go:141] libmachine: (addons-021732)     </console>
	I0610 10:21:50.100287   11511 main.go:141] libmachine: (addons-021732)     <rng model='virtio'>
	I0610 10:21:50.100309   11511 main.go:141] libmachine: (addons-021732)       <backend model='random'>/dev/random</backend>
	I0610 10:21:50.100318   11511 main.go:141] libmachine: (addons-021732)     </rng>
	I0610 10:21:50.100325   11511 main.go:141] libmachine: (addons-021732)     
	I0610 10:21:50.100331   11511 main.go:141] libmachine: (addons-021732)     
	I0610 10:21:50.100341   11511 main.go:141] libmachine: (addons-021732)   </devices>
	I0610 10:21:50.100351   11511 main.go:141] libmachine: (addons-021732) </domain>
	I0610 10:21:50.100360   11511 main.go:141] libmachine: (addons-021732) 
	I0610 10:21:50.106211   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:e2:5e:51 in network default
	I0610 10:21:50.107734   11511 main.go:141] libmachine: (addons-021732) Ensuring networks are active...
	I0610 10:21:50.107758   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:21:50.108454   11511 main.go:141] libmachine: (addons-021732) Ensuring network default is active
	I0610 10:21:50.108728   11511 main.go:141] libmachine: (addons-021732) Ensuring network mk-addons-021732 is active
	I0610 10:21:50.109215   11511 main.go:141] libmachine: (addons-021732) Getting domain xml...
	I0610 10:21:50.109907   11511 main.go:141] libmachine: (addons-021732) Creating domain...
	I0610 10:21:51.349978   11511 main.go:141] libmachine: (addons-021732) Waiting to get IP...
	I0610 10:21:51.350666   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:21:51.351052   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find current IP address of domain addons-021732 in network mk-addons-021732
	I0610 10:21:51.351077   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:51.351028   11534 retry.go:31] will retry after 227.859894ms: waiting for machine to come up
	I0610 10:21:51.580389   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:21:51.580808   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find current IP address of domain addons-021732 in network mk-addons-021732
	I0610 10:21:51.580842   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:51.580770   11534 retry.go:31] will retry after 377.61731ms: waiting for machine to come up
	I0610 10:21:51.960306   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:21:51.960650   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find current IP address of domain addons-021732 in network mk-addons-021732
	I0610 10:21:51.960684   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:51.960632   11534 retry.go:31] will retry after 425.397308ms: waiting for machine to come up
	I0610 10:21:52.387234   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:21:52.387657   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find current IP address of domain addons-021732 in network mk-addons-021732
	I0610 10:21:52.387686   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:52.387631   11534 retry.go:31] will retry after 383.080459ms: waiting for machine to come up
	I0610 10:21:52.772105   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:21:52.772489   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find current IP address of domain addons-021732 in network mk-addons-021732
	I0610 10:21:52.772514   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:52.772452   11534 retry.go:31] will retry after 606.763353ms: waiting for machine to come up
	I0610 10:21:53.381987   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:21:53.382481   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find current IP address of domain addons-021732 in network mk-addons-021732
	I0610 10:21:53.382514   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:53.382428   11534 retry.go:31] will retry after 758.641117ms: waiting for machine to come up
	I0610 10:21:54.143101   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:21:54.143489   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find current IP address of domain addons-021732 in network mk-addons-021732
	I0610 10:21:54.143510   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:54.143460   11534 retry.go:31] will retry after 1.125193015s: waiting for machine to come up
	I0610 10:21:55.270444   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:21:55.270880   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find current IP address of domain addons-021732 in network mk-addons-021732
	I0610 10:21:55.270914   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:55.270850   11534 retry.go:31] will retry after 1.115970155s: waiting for machine to come up
	I0610 10:21:56.388121   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:21:56.388519   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find current IP address of domain addons-021732 in network mk-addons-021732
	I0610 10:21:56.388545   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:56.388486   11534 retry.go:31] will retry after 1.346495635s: waiting for machine to come up
	I0610 10:21:57.736834   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:21:57.737297   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find current IP address of domain addons-021732 in network mk-addons-021732
	I0610 10:21:57.737325   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:57.737234   11534 retry.go:31] will retry after 1.420732083s: waiting for machine to come up
	I0610 10:21:59.159782   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:21:59.160224   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find current IP address of domain addons-021732 in network mk-addons-021732
	I0610 10:21:59.160253   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:21:59.160159   11534 retry.go:31] will retry after 2.590877904s: waiting for machine to come up
	I0610 10:22:01.754009   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:01.754437   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find current IP address of domain addons-021732 in network mk-addons-021732
	I0610 10:22:01.754463   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:22:01.754388   11534 retry.go:31] will retry after 3.42062392s: waiting for machine to come up
	I0610 10:22:05.176466   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:05.176856   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find current IP address of domain addons-021732 in network mk-addons-021732
	I0610 10:22:05.176881   11511 main.go:141] libmachine: (addons-021732) DBG | I0610 10:22:05.176803   11534 retry.go:31] will retry after 4.163744632s: waiting for machine to come up
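
The repeated "will retry after …" lines are a polling loop that waits for the domain's DHCP lease to show an IP, backing off with growing, jittered delays. A minimal sketch of that pattern; the lookup helper, delays, and overall timeout below are illustrative assumptions, not minikube's actual retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the real DHCP-lease lookup; it is a placeholder, not minikube code.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	backoff := 200 * time.Millisecond
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Add jitter and grow the wait, roughly matching the increasing delays in the log.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		backoff *= 2
	}
	fmt.Println("timed out waiting for machine IP")
}
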
	I0610 10:22:09.345304   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:09.345784   11511 main.go:141] libmachine: (addons-021732) Found IP for machine: 192.168.39.244
	I0610 10:22:09.345820   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has current primary IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:09.345831   11511 main.go:141] libmachine: (addons-021732) Reserving static IP address...
	I0610 10:22:09.346208   11511 main.go:141] libmachine: (addons-021732) DBG | unable to find host DHCP lease matching {name: "addons-021732", mac: "52:54:00:70:72:ae", ip: "192.168.39.244"} in network mk-addons-021732
	I0610 10:22:09.417862   11511 main.go:141] libmachine: (addons-021732) DBG | Getting to WaitForSSH function...
	I0610 10:22:09.417939   11511 main.go:141] libmachine: (addons-021732) Reserved static IP address: 192.168.39.244
	I0610 10:22:09.417959   11511 main.go:141] libmachine: (addons-021732) Waiting for SSH to be available...
	I0610 10:22:09.420832   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:09.421379   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:minikube Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:09.421410   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:09.421757   11511 main.go:141] libmachine: (addons-021732) DBG | Using SSH client type: external
	I0610 10:22:09.421782   11511 main.go:141] libmachine: (addons-021732) DBG | Using SSH private key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa (-rw-------)
	I0610 10:22:09.421818   11511 main.go:141] libmachine: (addons-021732) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.244 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0610 10:22:09.421833   11511 main.go:141] libmachine: (addons-021732) DBG | About to run SSH command:
	I0610 10:22:09.421846   11511 main.go:141] libmachine: (addons-021732) DBG | exit 0
	I0610 10:22:09.553527   11511 main.go:141] libmachine: (addons-021732) DBG | SSH cmd err, output: <nil>: 
	I0610 10:22:09.553801   11511 main.go:141] libmachine: (addons-021732) KVM machine creation complete!
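
WaitForSSH above shells out to the system ssh client with the listed options and treats a successful "exit 0" as proof the machine is reachable. A sketch of that probe with os/exec; the key path and address are placeholders for illustration, and the flags mirror the ones logged above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Probe the guest by running a trivial command over SSH; success means SSH is up.
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/path/to/machines/example/id_rsa", // placeholder key path
		"-p", "22",
		"docker@192.168.39.244",
		"exit 0",
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}
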
	I0610 10:22:09.554174   11511 main.go:141] libmachine: (addons-021732) Calling .GetConfigRaw
	I0610 10:22:09.554749   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:09.554953   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:09.555226   11511 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0610 10:22:09.555246   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:09.556806   11511 main.go:141] libmachine: Detecting operating system of created instance...
	I0610 10:22:09.556824   11511 main.go:141] libmachine: Waiting for SSH to be available...
	I0610 10:22:09.556840   11511 main.go:141] libmachine: Getting to WaitForSSH function...
	I0610 10:22:09.556849   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:09.559569   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:09.559929   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:09.559955   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:09.560096   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:09.560302   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:09.560469   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:09.560613   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:09.560777   11511 main.go:141] libmachine: Using SSH client type: native
	I0610 10:22:09.561022   11511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0610 10:22:09.561038   11511 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0610 10:22:09.660271   11511 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 10:22:09.660292   11511 main.go:141] libmachine: Detecting the provisioner...
	I0610 10:22:09.660299   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:09.663173   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:09.663594   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:09.663630   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:09.663845   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:09.664042   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:09.664220   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:09.664345   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:09.664515   11511 main.go:141] libmachine: Using SSH client type: native
	I0610 10:22:09.664717   11511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0610 10:22:09.664733   11511 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0610 10:22:09.765402   11511 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0610 10:22:09.765480   11511 main.go:141] libmachine: found compatible host: buildroot
	I0610 10:22:09.765494   11511 main.go:141] libmachine: Provisioning with buildroot...
	I0610 10:22:09.765508   11511 main.go:141] libmachine: (addons-021732) Calling .GetMachineName
	I0610 10:22:09.765725   11511 buildroot.go:166] provisioning hostname "addons-021732"
	I0610 10:22:09.765749   11511 main.go:141] libmachine: (addons-021732) Calling .GetMachineName
	I0610 10:22:09.765929   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:09.768370   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:09.768711   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:09.768738   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:09.768867   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:09.769046   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:09.769209   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:09.769337   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:09.769495   11511 main.go:141] libmachine: Using SSH client type: native
	I0610 10:22:09.769702   11511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0610 10:22:09.769722   11511 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-021732 && echo "addons-021732" | sudo tee /etc/hostname
	I0610 10:22:09.889179   11511 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-021732
	
	I0610 10:22:09.889217   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:09.892330   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:09.892660   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:09.892700   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:09.892888   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:09.893099   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:09.893299   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:09.893456   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:09.893635   11511 main.go:141] libmachine: Using SSH client type: native
	I0610 10:22:09.893793   11511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0610 10:22:09.893808   11511 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-021732' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-021732/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-021732' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 10:22:10.005934   11511 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 10:22:10.005964   11511 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19046-3880/.minikube CaCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19046-3880/.minikube}
	I0610 10:22:10.006014   11511 buildroot.go:174] setting up certificates
	I0610 10:22:10.006034   11511 provision.go:84] configureAuth start
	I0610 10:22:10.006052   11511 main.go:141] libmachine: (addons-021732) Calling .GetMachineName
	I0610 10:22:10.006391   11511 main.go:141] libmachine: (addons-021732) Calling .GetIP
	I0610 10:22:10.009526   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.009931   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:10.009953   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.010070   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:10.012193   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.012556   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:10.012582   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.012747   11511 provision.go:143] copyHostCerts
	I0610 10:22:10.012847   11511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem (1082 bytes)
	I0610 10:22:10.013010   11511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem (1123 bytes)
	I0610 10:22:10.013093   11511 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem (1675 bytes)
	I0610 10:22:10.013160   11511 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem org=jenkins.addons-021732 san=[127.0.0.1 192.168.39.244 addons-021732 localhost minikube]
	I0610 10:22:10.130372   11511 provision.go:177] copyRemoteCerts
	I0610 10:22:10.130433   11511 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 10:22:10.130455   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:10.133258   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.133608   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:10.133630   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.133786   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:10.133957   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:10.134132   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:10.134273   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:10.214993   11511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 10:22:10.237877   11511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0610 10:22:10.260926   11511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 10:22:10.284159   11511 provision.go:87] duration metric: took 278.109655ms to configureAuth
	I0610 10:22:10.284186   11511 buildroot.go:189] setting minikube options for container-runtime
	I0610 10:22:10.284343   11511 config.go:182] Loaded profile config "addons-021732": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:22:10.284406   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:10.287363   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.287723   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:10.287751   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.287899   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:10.288121   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:10.288322   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:10.288471   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:10.288643   11511 main.go:141] libmachine: Using SSH client type: native
	I0610 10:22:10.288814   11511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0610 10:22:10.288831   11511 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 10:22:10.834878   11511 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 10:22:10.834911   11511 main.go:141] libmachine: Checking connection to Docker...
	I0610 10:22:10.834923   11511 main.go:141] libmachine: (addons-021732) Calling .GetURL
	I0610 10:22:10.836450   11511 main.go:141] libmachine: (addons-021732) DBG | Using libvirt version 6000000
	I0610 10:22:10.838766   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.839129   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:10.839172   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.839314   11511 main.go:141] libmachine: Docker is up and running!
	I0610 10:22:10.839326   11511 main.go:141] libmachine: Reticulating splines...
	I0610 10:22:10.839334   11511 client.go:171] duration metric: took 21.435451924s to LocalClient.Create
	I0610 10:22:10.839361   11511 start.go:167] duration metric: took 21.435501976s to libmachine.API.Create "addons-021732"
	I0610 10:22:10.839373   11511 start.go:293] postStartSetup for "addons-021732" (driver="kvm2")
	I0610 10:22:10.839390   11511 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 10:22:10.839412   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:10.839654   11511 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 10:22:10.839676   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:10.841993   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.842280   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:10.842296   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.842457   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:10.842624   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:10.842797   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:10.842945   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:10.923172   11511 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 10:22:10.927454   11511 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 10:22:10.927481   11511 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/addons for local assets ...
	I0610 10:22:10.927551   11511 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/files for local assets ...
	I0610 10:22:10.927573   11511 start.go:296] duration metric: took 88.191201ms for postStartSetup
	I0610 10:22:10.927602   11511 main.go:141] libmachine: (addons-021732) Calling .GetConfigRaw
	I0610 10:22:10.928177   11511 main.go:141] libmachine: (addons-021732) Calling .GetIP
	I0610 10:22:10.930881   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.931294   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:10.931314   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.931643   11511 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/config.json ...
	I0610 10:22:10.931868   11511 start.go:128] duration metric: took 21.546245786s to createHost
	I0610 10:22:10.931894   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:10.934754   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.935163   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:10.935194   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:10.935379   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:10.935559   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:10.935742   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:10.935864   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:10.936020   11511 main.go:141] libmachine: Using SSH client type: native
	I0610 10:22:10.936180   11511 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.244 22 <nil> <nil>}
	I0610 10:22:10.936190   11511 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 10:22:11.037977   11511 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718014930.997399018
	
	I0610 10:22:11.038002   11511 fix.go:216] guest clock: 1718014930.997399018
	I0610 10:22:11.038011   11511 fix.go:229] Guest: 2024-06-10 10:22:10.997399018 +0000 UTC Remote: 2024-06-10 10:22:10.931882063 +0000 UTC m=+21.648444948 (delta=65.516955ms)
	I0610 10:22:11.038060   11511 fix.go:200] guest clock delta is within tolerance: 65.516955ms
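
The fix.go lines above compare the guest's `date +%s.%N` output with the host-side reference time and accept the machine when the skew is small. A sketch of that comparison using the values from this run; the 2s tolerance is an assumed threshold, not necessarily minikube's exact value:

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Values taken from the log above: the guest's `date +%s.%N` output and the
	// host-side reference time recorded for this run.
	guestOut := "1718014930.997399018"
	hostRef := time.Date(2024, 6, 10, 10, 22, 10, 931882063, time.UTC)

	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second))).UTC()

	delta := guest.Sub(hostRef)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock skew too large: %v\n", delta)
	}
}
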
	I0610 10:22:11.038068   11511 start.go:83] releasing machines lock for "addons-021732", held for 21.652524556s
	I0610 10:22:11.038096   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:11.038405   11511 main.go:141] libmachine: (addons-021732) Calling .GetIP
	I0610 10:22:11.040989   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:11.041443   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:11.041471   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:11.041604   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:11.042090   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:11.042310   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:11.042413   11511 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 10:22:11.042452   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:11.042535   11511 ssh_runner.go:195] Run: cat /version.json
	I0610 10:22:11.042551   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:11.044973   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:11.045049   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:11.045383   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:11.045416   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:11.045439   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:11.045500   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:11.045586   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:11.045788   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:11.045790   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:11.045927   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:11.046000   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:11.046043   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:11.046103   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:11.046234   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:11.169207   11511 ssh_runner.go:195] Run: systemctl --version
	I0610 10:22:11.175146   11511 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 10:22:11.340855   11511 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 10:22:11.346606   11511 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 10:22:11.346664   11511 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 10:22:11.362850   11511 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 10:22:11.362877   11511 start.go:494] detecting cgroup driver to use...
	I0610 10:22:11.362936   11511 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 10:22:11.379694   11511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 10:22:11.393162   11511 docker.go:217] disabling cri-docker service (if available) ...
	I0610 10:22:11.393215   11511 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 10:22:11.409101   11511 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 10:22:11.422412   11511 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 10:22:11.531476   11511 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 10:22:11.671983   11511 docker.go:233] disabling docker service ...
	I0610 10:22:11.672061   11511 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 10:22:11.685151   11511 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 10:22:11.697547   11511 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 10:22:11.808530   11511 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 10:22:11.926000   11511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 10:22:11.939925   11511 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 10:22:11.957031   11511 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0610 10:22:11.957101   11511 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:22:11.967189   11511 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 10:22:11.967259   11511 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:22:11.977385   11511 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:22:11.987234   11511 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:22:11.997028   11511 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 10:22:12.008455   11511 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:22:12.019735   11511 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:22:12.035735   11511 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:22:12.045595   11511 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 10:22:12.055221   11511 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0610 10:22:12.055286   11511 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0610 10:22:12.068098   11511 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 10:22:12.077742   11511 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:22:12.194498   11511 ssh_runner.go:195] Run: sudo systemctl restart crio
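
The `sed -i` commands above edit /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image and switch the cgroup manager before CRI-O is restarted. An equivalent sketch of the first two substitutions with Go's regexp package; the path is the one from the log, so run it against a copy of the file when experimenting:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Replace the pause_image and cgroup_manager lines, mirroring the sed edits above.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
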
	I0610 10:22:12.324870   11511 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 10:22:12.324974   11511 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 10:22:12.329373   11511 start.go:562] Will wait 60s for crictl version
	I0610 10:22:12.329460   11511 ssh_runner.go:195] Run: which crictl
	I0610 10:22:12.332853   11511 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 10:22:12.371659   11511 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0610 10:22:12.371781   11511 ssh_runner.go:195] Run: crio --version
	I0610 10:22:12.397154   11511 ssh_runner.go:195] Run: crio --version
	I0610 10:22:12.427999   11511 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0610 10:22:12.429770   11511 main.go:141] libmachine: (addons-021732) Calling .GetIP
	I0610 10:22:12.432457   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:12.432818   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:12.432846   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:12.433055   11511 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0610 10:22:12.437030   11511 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 10:22:12.449056   11511 kubeadm.go:877] updating cluster {Name:addons-021732 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-021732 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 10:22:12.449162   11511 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 10:22:12.449206   11511 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 10:22:12.480175   11511 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0610 10:22:12.480258   11511 ssh_runner.go:195] Run: which lz4
	I0610 10:22:12.483870   11511 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0610 10:22:12.487927   11511 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 10:22:12.487967   11511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0610 10:22:13.679875   11511 crio.go:462] duration metric: took 1.196047703s to copy over tarball
	I0610 10:22:13.679944   11511 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 10:22:15.956372   11511 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.276395309s)
	I0610 10:22:15.956403   11511 crio.go:469] duration metric: took 2.276502967s to extract the tarball
	I0610 10:22:15.956412   11511 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 10:22:15.992742   11511 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 10:22:16.031848   11511 crio.go:514] all images are preloaded for cri-o runtime.
	I0610 10:22:16.031870   11511 cache_images.go:84] Images are preloaded, skipping loading
	I0610 10:22:16.031878   11511 kubeadm.go:928] updating node { 192.168.39.244 8443 v1.30.1 crio true true} ...
	I0610 10:22:16.031969   11511 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-021732 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.244
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-021732 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 10:22:16.032032   11511 ssh_runner.go:195] Run: crio config
	I0610 10:22:16.080464   11511 cni.go:84] Creating CNI manager for ""
	I0610 10:22:16.080482   11511 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 10:22:16.080490   11511 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 10:22:16.080510   11511 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.244 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-021732 NodeName:addons-021732 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.244"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.244 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 10:22:16.080644   11511 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.244
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-021732"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.244
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.244"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 10:22:16.080716   11511 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 10:22:16.090288   11511 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 10:22:16.090368   11511 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 10:22:16.099170   11511 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0610 10:22:16.114472   11511 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 10:22:16.129333   11511 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0610 10:22:16.144593   11511 ssh_runner.go:195] Run: grep 192.168.39.244	control-plane.minikube.internal$ /etc/hosts
	I0610 10:22:16.148173   11511 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.244	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 10:22:16.159292   11511 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:22:16.291407   11511 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 10:22:16.307764   11511 certs.go:68] Setting up /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732 for IP: 192.168.39.244
	I0610 10:22:16.307790   11511 certs.go:194] generating shared ca certs ...
	I0610 10:22:16.307809   11511 certs.go:226] acquiring lock for ca certs: {Name:mke8b68fecbd8b649419d142cdde25446085a9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:16.307987   11511 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key
	I0610 10:22:16.360498   11511 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt ...
	I0610 10:22:16.360527   11511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt: {Name:mka5aee245599ed1c73a6589e4bd7041817accf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:16.360720   11511 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key ...
	I0610 10:22:16.360737   11511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key: {Name:mke821ebc9a1f87cafb59cae5dc616ee25e2a67c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:16.360837   11511 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key
	I0610 10:22:16.414996   11511 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt ...
	I0610 10:22:16.415020   11511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt: {Name:mk1eb7154d51413f36bfe7ec5ebca9175f12c53f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:16.415195   11511 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key ...
	I0610 10:22:16.415212   11511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key: {Name:mk3d8b84fe579a4f2beabd4c3f73806adb29637d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:16.415318   11511 certs.go:256] generating profile certs ...
	I0610 10:22:16.415393   11511 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.key
	I0610 10:22:16.415414   11511 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt with IP's: []
	I0610 10:22:16.600910   11511 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt ...
	I0610 10:22:16.600941   11511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: {Name:mk6ee51d7a9f9a0656ea660e6de93886eb2d79ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:16.601128   11511 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.key ...
	I0610 10:22:16.601145   11511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.key: {Name:mkb4515cd87b5353d29e229ea3c778e43a085bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:16.601249   11511 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/apiserver.key.56f9b0b4
	I0610 10:22:16.601270   11511 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/apiserver.crt.56f9b0b4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.244]
	I0610 10:22:16.648074   11511 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/apiserver.crt.56f9b0b4 ...
	I0610 10:22:16.648110   11511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/apiserver.crt.56f9b0b4: {Name:mk2f86f460b055062ad012cbb6ae1733f96777ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:16.648304   11511 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/apiserver.key.56f9b0b4 ...
	I0610 10:22:16.648323   11511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/apiserver.key.56f9b0b4: {Name:mkdea12d61593f69a56bef54ad06acc161e91f21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:16.648420   11511 certs.go:381] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/apiserver.crt.56f9b0b4 -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/apiserver.crt
	I0610 10:22:16.648514   11511 certs.go:385] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/apiserver.key.56f9b0b4 -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/apiserver.key
	I0610 10:22:16.648580   11511 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/proxy-client.key
	I0610 10:22:16.648606   11511 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/proxy-client.crt with IP's: []
	I0610 10:22:16.866970   11511 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/proxy-client.crt ...
	I0610 10:22:16.867008   11511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/proxy-client.crt: {Name:mk267f1b3cdb4c073d022895f7afa4a7c60f29d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:16.867228   11511 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/proxy-client.key ...
	I0610 10:22:16.867251   11511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/proxy-client.key: {Name:mk702199a561852b7205391efdfb13e22bee7cc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:16.867505   11511 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 10:22:16.867545   11511 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem (1082 bytes)
	I0610 10:22:16.867581   11511 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem (1123 bytes)
	I0610 10:22:16.867624   11511 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem (1675 bytes)
	I0610 10:22:16.868242   11511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 10:22:16.892285   11511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 10:22:16.914376   11511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 10:22:16.936371   11511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 10:22:16.958373   11511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0610 10:22:16.982406   11511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 10:22:17.017509   11511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 10:22:17.044204   11511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 10:22:17.066098   11511 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 10:22:17.087984   11511 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 10:22:17.103883   11511 ssh_runner.go:195] Run: openssl version
	I0610 10:22:17.109446   11511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 10:22:17.119696   11511 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:22:17.123615   11511 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:22:17.123668   11511 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:22:17.129098   11511 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
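Editor's note: the two ssh_runner commands above install the CA into the guest's OpenSSL trust store: the PEM is symlinked into /etc/ssl/certs, its subject hash is computed, and a second symlink named <subject-hash>.0 (here b5213941.0) is created so OpenSSL's hashed-directory lookup can find it. A minimal sketch of the same steps, assuming the paths from the log:
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$CERT" /etc/ssl/certs/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints the subject hash, e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"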
	I0610 10:22:17.138759   11511 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 10:22:17.142392   11511 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 10:22:17.142441   11511 kubeadm.go:391] StartCluster: {Name:addons-021732 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-021732 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:22:17.142535   11511 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0610 10:22:17.142584   11511 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 10:22:17.176071   11511 cri.go:89] found id: ""
	I0610 10:22:17.176133   11511 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 10:22:17.185884   11511 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 10:22:17.194886   11511 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 10:22:17.203675   11511 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 10:22:17.203703   11511 kubeadm.go:156] found existing configuration files:
	
	I0610 10:22:17.203748   11511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 10:22:17.212071   11511 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 10:22:17.212139   11511 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 10:22:17.220828   11511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 10:22:17.229346   11511 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 10:22:17.229413   11511 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 10:22:17.238080   11511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 10:22:17.246321   11511 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 10:22:17.246376   11511 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 10:22:17.254950   11511 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 10:22:17.263256   11511 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 10:22:17.263317   11511 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 10:22:17.271922   11511 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 10:22:17.333616   11511 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0610 10:22:17.333697   11511 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 10:22:17.462068   11511 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 10:22:17.462205   11511 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 10:22:17.462426   11511 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0610 10:22:17.653282   11511 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 10:22:17.812077   11511 out.go:204]   - Generating certificates and keys ...
	I0610 10:22:17.812183   11511 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 10:22:17.812253   11511 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 10:22:18.044566   11511 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 10:22:18.349721   11511 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0610 10:22:18.621027   11511 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0610 10:22:18.898324   11511 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0610 10:22:19.050267   11511 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0610 10:22:19.050408   11511 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-021732 localhost] and IPs [192.168.39.244 127.0.0.1 ::1]
	I0610 10:22:19.192253   11511 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0610 10:22:19.192427   11511 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-021732 localhost] and IPs [192.168.39.244 127.0.0.1 ::1]
	I0610 10:22:19.301659   11511 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 10:22:19.644535   11511 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 10:22:19.908664   11511 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0610 10:22:19.908825   11511 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 10:22:20.134821   11511 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 10:22:20.421465   11511 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 10:22:20.546558   11511 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 10:22:20.770192   11511 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 10:22:20.888676   11511 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 10:22:20.889201   11511 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 10:22:20.892246   11511 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 10:22:20.894114   11511 out.go:204]   - Booting up control plane ...
	I0610 10:22:20.894210   11511 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 10:22:20.894284   11511 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 10:22:20.894894   11511 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 10:22:20.910491   11511 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 10:22:20.911472   11511 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 10:22:20.911524   11511 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 10:22:21.039434   11511 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0610 10:22:21.039571   11511 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 10:22:21.541408   11511 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.892392ms
	I0610 10:22:21.541492   11511 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0610 10:22:28.044380   11511 kubeadm.go:309] [api-check] The API server is healthy after 6.501040579s
	I0610 10:22:28.056064   11511 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 10:22:28.073006   11511 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 10:22:28.099386   11511 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 10:22:28.099647   11511 kubeadm.go:309] [mark-control-plane] Marking the node addons-021732 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 10:22:28.111389   11511 kubeadm.go:309] [bootstrap-token] Using token: u7nktn.l02ueaavloy4yy05
	I0610 10:22:28.113155   11511 out.go:204]   - Configuring RBAC rules ...
	I0610 10:22:28.113302   11511 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 10:22:28.121519   11511 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 10:22:28.134963   11511 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 10:22:28.138911   11511 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 10:22:28.142459   11511 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 10:22:28.145784   11511 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 10:22:28.449504   11511 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 10:22:28.902660   11511 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 10:22:29.450218   11511 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 10:22:29.450243   11511 kubeadm.go:309] 
	I0610 10:22:29.450337   11511 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 10:22:29.450361   11511 kubeadm.go:309] 
	I0610 10:22:29.450453   11511 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 10:22:29.450470   11511 kubeadm.go:309] 
	I0610 10:22:29.450519   11511 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 10:22:29.450601   11511 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 10:22:29.450682   11511 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 10:22:29.450692   11511 kubeadm.go:309] 
	I0610 10:22:29.450777   11511 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 10:22:29.450792   11511 kubeadm.go:309] 
	I0610 10:22:29.450867   11511 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 10:22:29.450877   11511 kubeadm.go:309] 
	I0610 10:22:29.450949   11511 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 10:22:29.451044   11511 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 10:22:29.451130   11511 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 10:22:29.451140   11511 kubeadm.go:309] 
	I0610 10:22:29.451263   11511 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 10:22:29.451370   11511 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 10:22:29.451382   11511 kubeadm.go:309] 
	I0610 10:22:29.451484   11511 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token u7nktn.l02ueaavloy4yy05 \
	I0610 10:22:29.451604   11511 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e \
	I0610 10:22:29.451636   11511 kubeadm.go:309] 	--control-plane 
	I0610 10:22:29.451647   11511 kubeadm.go:309] 
	I0610 10:22:29.451751   11511 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 10:22:29.451760   11511 kubeadm.go:309] 
	I0610 10:22:29.451870   11511 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token u7nktn.l02ueaavloy4yy05 \
	I0610 10:22:29.452040   11511 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e 
	I0610 10:22:29.452144   11511 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 10:22:29.452161   11511 cni.go:84] Creating CNI manager for ""
	I0610 10:22:29.452170   11511 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 10:22:29.454095   11511 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 10:22:29.455437   11511 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 10:22:29.465453   11511 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
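Editor's note: the 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not shown in the log. A typical bridge CNI conflist (bridge plugin with host-local IPAM plus portmap) looks roughly like the sketch below; every field value here is an illustrative assumption, not the exact file minikube writes.
	# Hypothetical contents; minikube's actual template may differ.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF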
	I0610 10:22:29.486680   11511 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 10:22:29.486773   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:29.486877   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-021732 minikube.k8s.io/updated_at=2024_06_10T10_22_29_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=addons-021732 minikube.k8s.io/primary=true
	I0610 10:22:29.533644   11511 ops.go:34] apiserver oom_adj: -16
	I0610 10:22:29.648721   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:30.149074   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:30.648868   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:31.149416   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:31.649453   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:32.149201   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:32.649429   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:33.149343   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:33.649097   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:34.149718   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:34.648840   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:35.149571   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:35.648972   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:36.149208   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:36.649767   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:37.149623   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:37.648852   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:38.149733   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:38.649139   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:39.149383   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:39.648827   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:40.149366   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:40.648838   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:41.148981   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:41.649674   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:42.149418   11511 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:22:42.248475   11511 kubeadm.go:1107] duration metric: took 12.761770799s to wait for elevateKubeSystemPrivileges
	W0610 10:22:42.248513   11511 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 10:22:42.248523   11511 kubeadm.go:393] duration metric: took 25.106086137s to StartCluster
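Editor's note: the burst of identical "kubectl get sa default" runs above is a poll loop, which the log attributes to elevateKubeSystemPrivileges: minikube retries roughly every 500ms until the default ServiceAccount exists (i.e. kube-controller-manager has finished bootstrapping the namespace), which here took about 12.8s. In shell form the wait is roughly the sketch below (the real loop lives in minikube's Go code, so the interval is an assumption).
	# Sketch of the poll seen above; command and kubeconfig path copied from the log lines.
	until sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done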
	I0610 10:22:42.248544   11511 settings.go:142] acquiring lock: {Name:mk00410f6b6051b7558c7a348cc8c9f1c35c7547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:42.248667   11511 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 10:22:42.249143   11511 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/kubeconfig: {Name:mk6bc087e599296d9e4a696a021944fac20ee98b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:22:42.249366   11511 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 10:22:42.249388   11511 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.244 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 10:22:42.251164   11511 out.go:177] * Verifying Kubernetes components...
	I0610 10:22:42.249439   11511 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0610 10:22:42.251247   11511 addons.go:69] Setting yakd=true in profile "addons-021732"
	I0610 10:22:42.251261   11511 addons.go:69] Setting cloud-spanner=true in profile "addons-021732"
	I0610 10:22:42.251273   11511 addons.go:69] Setting registry=true in profile "addons-021732"
	I0610 10:22:42.251283   11511 addons.go:234] Setting addon yakd=true in "addons-021732"
	I0610 10:22:42.251289   11511 addons.go:234] Setting addon cloud-spanner=true in "addons-021732"
	I0610 10:22:42.251287   11511 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-021732"
	I0610 10:22:42.251301   11511 addons.go:69] Setting inspektor-gadget=true in profile "addons-021732"
	I0610 10:22:42.251314   11511 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-021732"
	I0610 10:22:42.251317   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.251322   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.251327   11511 addons.go:234] Setting addon inspektor-gadget=true in "addons-021732"
	I0610 10:22:42.251329   11511 addons.go:69] Setting storage-provisioner=true in profile "addons-021732"
	I0610 10:22:42.251349   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.251344   11511 addons.go:69] Setting volcano=true in profile "addons-021732"
	I0610 10:22:42.251353   11511 addons.go:234] Setting addon storage-provisioner=true in "addons-021732"
	I0610 10:22:42.251384   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.251388   11511 addons.go:234] Setting addon volcano=true in "addons-021732"
	I0610 10:22:42.251430   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.249634   11511 config.go:182] Loaded profile config "addons-021732": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:22:42.251745   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.251749   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.251758   11511 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-021732"
	I0610 10:22:42.251762   11511 addons.go:69] Setting gcp-auth=true in profile "addons-021732"
	I0610 10:22:42.251760   11511 addons.go:69] Setting volumesnapshots=true in profile "addons-021732"
	I0610 10:22:42.251768   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.251777   11511 mustload.go:65] Loading cluster: addons-021732
	I0610 10:22:42.251774   11511 addons.go:69] Setting helm-tiller=true in profile "addons-021732"
	I0610 10:22:42.251782   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.251744   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.251798   11511 addons.go:234] Setting addon helm-tiller=true in "addons-021732"
	I0610 10:22:42.251801   11511 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-021732"
	I0610 10:22:42.251804   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.251817   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.251823   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.251821   11511 addons.go:69] Setting default-storageclass=true in profile "addons-021732"
	I0610 10:22:42.251845   11511 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-021732"
	I0610 10:22:42.251784   11511 addons.go:234] Setting addon volumesnapshots=true in "addons-021732"
	I0610 10:22:42.257630   11511 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:22:42.251745   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.251250   11511 addons.go:69] Setting ingress-dns=true in profile "addons-021732"
	I0610 10:22:42.257776   11511 addons.go:234] Setting addon ingress-dns=true in "addons-021732"
	I0610 10:22:42.257821   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.251761   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.257892   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.251925   11511 config.go:182] Loaded profile config "addons-021732": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:22:42.251940   11511 addons.go:69] Setting ingress=true in profile "addons-021732"
	I0610 10:22:42.258072   11511 addons.go:234] Setting addon ingress=true in "addons-021732"
	I0610 10:22:42.258113   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.251294   11511 addons.go:234] Setting addon registry=true in "addons-021732"
	I0610 10:22:42.258312   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.258336   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.258340   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.258448   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.258471   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.251956   11511 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-021732"
	I0610 10:22:42.258565   11511 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-021732"
	I0610 10:22:42.258589   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.258677   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.251965   11511 addons.go:69] Setting metrics-server=true in profile "addons-021732"
	I0610 10:22:42.252138   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.252174   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.252193   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.252195   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.257735   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.258718   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.251990   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.258788   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.258864   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.258910   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.258918   11511 addons.go:234] Setting addon metrics-server=true in "addons-021732"
	I0610 10:22:42.258936   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.259044   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.259064   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.259219   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.259246   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.264602   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.265001   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.265050   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.272999   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43495
	I0610 10:22:42.273483   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.274166   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.274186   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.278753   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33759
	I0610 10:22:42.279469   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.279481   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38697
	I0610 10:22:42.279844   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.280071   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.280091   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.280441   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.280650   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44759
	I0610 10:22:42.280999   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.281036   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.281070   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.281443   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.281463   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.281607   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.281617   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.281768   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.281998   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.289261   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.289308   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.289402   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.289438   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.289446   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.289466   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.293520   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.293607   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38867
	I0610 10:22:42.293724   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41717
	I0610 10:22:42.293798   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39599
	I0610 10:22:42.293856   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36151
	I0610 10:22:42.294420   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.294454   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.300302   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.300355   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.300424   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.301006   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.301024   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.301087   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.301559   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.301580   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.301652   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.301779   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.301788   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.301846   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.301959   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.301969   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.302173   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.302464   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.302799   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.302888   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.302949   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.303016   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.303835   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.303924   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.309234   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.309625   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.309670   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.317136   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45165
	I0610 10:22:42.317854   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.319685   11511 addons.go:234] Setting addon default-storageclass=true in "addons-021732"
	I0610 10:22:42.319731   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.320157   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.320189   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.321015   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.321037   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.321439   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.321988   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.322025   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.322255   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0610 10:22:42.322852   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.323348   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.323364   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.323714   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.324237   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.324273   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.326995   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34247
	I0610 10:22:42.327401   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.327843   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.327860   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.328192   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.328733   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.328767   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.328985   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43355
	I0610 10:22:42.329545   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.330146   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.330172   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.330562   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.331198   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.331815   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36565
	I0610 10:22:42.332380   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.332895   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.332916   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.333262   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.333321   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.333601   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:42.333616   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:42.333818   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:42.333846   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:42.333867   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:42.333869   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.333879   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:42.333889   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:42.333904   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.334122   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:42.334140   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	W0610 10:22:42.334242   11511 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0610 10:22:42.347603   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43423
	I0610 10:22:42.348140   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.348725   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.348742   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.349163   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.349387   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.349989   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46293
	I0610 10:22:42.350634   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.351119   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.351138   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.351452   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.351812   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.353965   11511 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-021732"
	I0610 10:22:42.354010   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:42.354376   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.354415   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.355376   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40833
	I0610 10:22:42.355499   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32809
	I0610 10:22:42.355589   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34981
	I0610 10:22:42.355708   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46739
	I0610 10:22:42.355778   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.356082   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.357719   11511 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0610 10:22:42.356465   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.356512   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34521
	I0610 10:22:42.356560   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.356792   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.357181   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.359156   11511 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0610 10:22:42.359169   11511 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0610 10:22:42.359188   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.359334   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.359963   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.360090   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.360104   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.360123   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.360135   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.360759   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.363153   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.364927   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.365015   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.365067   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.365093   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.365119   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.365152   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41507
	I0610 10:22:42.365290   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.365358   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.365422   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46549
	I0610 10:22:42.365608   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.365629   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.365757   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.365941   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.366375   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.366416   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.366541   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.366617   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.366713   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.367118   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.367132   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.367459   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.367533   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.367553   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.367899   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.367921   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.368102   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.368672   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.368728   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.369478   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.369516   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.370254   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.372758   11511 out.go:177]   - Using image docker.io/registry:2.8.3
	I0610 10:22:42.371588   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.373031   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38869
	I0610 10:22:42.375297   11511 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0610 10:22:42.376566   11511 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0610 10:22:42.374649   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.374694   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.376309   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37413
	I0610 10:22:42.376512   11511 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0610 10:22:42.377839   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0610 10:22:42.377861   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.377919   11511 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0610 10:22:42.377926   11511 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0610 10:22:42.377938   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.378739   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36175
	I0610 10:22:42.379014   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.379027   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.379155   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.379165   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.379348   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.379540   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.379597   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.379644   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.379772   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.382045   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.382062   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.382112   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40857
	I0610 10:22:42.382287   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.383034   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.383060   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.383087   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.383130   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.383214   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.383267   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.383865   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.383918   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.384494   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.384671   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.385572   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.385789   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.385810   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.387694   11511 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0610 10:22:42.386241   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.386342   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.386895   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.389064   11511 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0610 10:22:42.389085   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0610 10:22:42.389113   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.389191   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.389216   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.389300   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44357
	I0610 10:22:42.389311   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42675
	I0610 10:22:42.389453   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.389511   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.389529   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.389541   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.391152   11511 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.29.0
	I0610 10:22:42.390037   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.390129   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.390521   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.390565   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.390705   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.392382   11511 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0610 10:22:42.392553   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.393177   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.394215   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43245
	I0610 10:22:42.394512   11511 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 10:22:42.394458   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.394607   11511 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0610 10:22:42.396465   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.394653   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.396514   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.394717   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.395175   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.396564   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.395215   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.395256   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.395355   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.396923   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.396117   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.396408   11511 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 10:22:42.397028   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 10:22:42.397044   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.396590   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.397259   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.398769   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.398796   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.398953   11511 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0610 10:22:42.397878   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.397893   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.398207   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.399854   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.401067   11511 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0610 10:22:42.400135   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:42.400545   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.400753   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.401024   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.401367   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46681
	I0610 10:22:42.401393   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.402145   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.402461   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:42.402763   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.404184   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.404280   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.404306   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.404470   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.405232   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45209
	I0610 10:22:42.405331   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.405445   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36415
	I0610 10:22:42.405931   11511 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0610 10:22:42.406177   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.406484   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.407359   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.407375   11511 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0610 10:22:42.408564   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.408582   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.407462   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37109
	I0610 10:22:42.407563   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.407855   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.407937   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.407971   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.408263   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.408988   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.410018   11511 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0610 10:22:42.412147   11511 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0610 10:22:42.410451   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.411034   11511 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0610 10:22:42.411348   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.411367   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.411382   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.411554   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.411658   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.413724   11511 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0610 10:22:42.414488   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0610 10:22:42.414507   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.415837   11511 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0610 10:22:42.415850   11511 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0610 10:22:42.415861   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.413913   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.415898   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.417071   11511 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0610 10:22:42.414620   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.414759   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.416293   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.418367   11511 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0610 10:22:42.416564   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.417373   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.417425   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.417492   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.421150   11511 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0610 10:22:42.420077   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.420140   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.420355   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.421669   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.421696   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.422053   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.422422   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.422472   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.423537   11511 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0610 10:22:42.423565   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.423869   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.424611   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.426036   11511 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0610 10:22:42.426052   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0610 10:22:42.426063   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.427562   11511 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0610 10:22:42.423949   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.424631   11511 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0610 10:22:42.424637   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.424828   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.425062   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35935
	I0610 10:22:42.425276   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.428735   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.429107   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.429124   11511 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0610 10:22:42.429380   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.430226   11511 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0610 10:22:42.430257   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.430468   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.432824   11511 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0610 10:22:42.430473   11511 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 10:22:42.430709   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:42.431491   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.431518   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0610 10:22:42.431663   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.431660   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.431679   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.434147   11511 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 10:22:42.434161   11511 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0610 10:22:42.434681   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:42.435362   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:42.435488   11511 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0610 10:22:42.435501   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0610 10:22:42.435520   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.435575   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.435612   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.435700   11511 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0610 10:22:42.435719   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.442249   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.442272   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.442284   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.442291   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.442312   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:42.442349   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.442504   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.442668   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.442711   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:42.442794   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.442941   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.444342   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:42.446306   11511 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0610 10:22:42.446308   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.444610   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.446336   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.446350   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.445575   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.447686   11511 out.go:177]   - Using image docker.io/busybox:stable
	I0610 10:22:42.445086   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.446373   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.446495   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.446629   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.446698   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.449010   11511 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0610 10:22:42.449030   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0610 10:22:42.447761   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.447801   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.449058   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.449076   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.447880   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.449109   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:42.448014   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.450026   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.450265   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.450593   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.450813   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.450976   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.452590   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.452996   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:42.453018   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:42.453313   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:42.453457   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:42.453604   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:42.453717   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:42.631538   11511 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0610 10:22:42.631563   11511 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 10:22:42.840669   11511 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0610 10:22:42.840692   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0610 10:22:42.868545   11511 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0610 10:22:42.868569   11511 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0610 10:22:42.895477   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0610 10:22:42.905161   11511 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0610 10:22:42.905187   11511 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0610 10:22:42.923174   11511 node_ready.go:35] waiting up to 6m0s for node "addons-021732" to be "Ready" ...
	I0610 10:22:42.926554   11511 node_ready.go:49] node "addons-021732" has status "Ready":"True"
	I0610 10:22:42.926576   11511 node_ready.go:38] duration metric: took 3.376822ms for node "addons-021732" to be "Ready" ...
	I0610 10:22:42.926583   11511 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 10:22:42.932885   11511 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jnxqr" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:42.961126   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0610 10:22:42.970642   11511 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0610 10:22:42.970671   11511 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0610 10:22:42.977915   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0610 10:22:42.993070   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0610 10:22:42.998860   11511 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0610 10:22:42.998880   11511 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0610 10:22:43.020779   11511 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0610 10:22:43.020799   11511 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0610 10:22:43.032088   11511 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0610 10:22:43.032111   11511 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0610 10:22:43.049458   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 10:22:43.054731   11511 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0610 10:22:43.054752   11511 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0610 10:22:43.064414   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 10:22:43.066140   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0610 10:22:43.084655   11511 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0610 10:22:43.084690   11511 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0610 10:22:43.142738   11511 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0610 10:22:43.142762   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0610 10:22:43.189500   11511 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0610 10:22:43.189529   11511 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0610 10:22:43.233642   11511 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0610 10:22:43.233667   11511 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0610 10:22:43.286609   11511 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0610 10:22:43.286631   11511 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0610 10:22:43.296691   11511 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0610 10:22:43.296712   11511 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0610 10:22:43.298001   11511 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0610 10:22:43.298018   11511 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0610 10:22:43.345105   11511 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0610 10:22:43.345134   11511 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0610 10:22:43.355667   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0610 10:22:43.358700   11511 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0610 10:22:43.358724   11511 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0610 10:22:43.450778   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0610 10:22:43.476495   11511 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0610 10:22:43.476515   11511 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0610 10:22:43.492222   11511 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0610 10:22:43.492242   11511 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0610 10:22:43.496126   11511 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0610 10:22:43.496143   11511 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0610 10:22:43.530452   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0610 10:22:43.546331   11511 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0610 10:22:43.546353   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0610 10:22:43.603303   11511 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0610 10:22:43.603342   11511 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0610 10:22:43.637692   11511 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0610 10:22:43.637713   11511 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0610 10:22:43.672226   11511 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0610 10:22:43.672249   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0610 10:22:43.751557   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0610 10:22:43.785082   11511 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0610 10:22:43.785107   11511 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0610 10:22:43.838512   11511 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0610 10:22:43.838544   11511 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0610 10:22:43.876935   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0610 10:22:43.970330   11511 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0610 10:22:43.970359   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0610 10:22:44.082890   11511 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.451313731s)
	I0610 10:22:44.082924   11511 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
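For reference, the start.go line above summarizes the sed pipeline completed at 10:22:44 (launched at 10:22:42.631538). Reconstructed from the escaped sed expression in that command, the pipeline edits the coredns ConfigMap's Corefile in two places: it inserts a log directive immediately before the "errors" line, and it inserts the following hosts stanza immediately before the "forward . /etc/resolv.conf" line:

        log

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }

The hosts stanza is what lets in-cluster workloads resolve host.minikube.internal to 192.168.39.1, the host side of the mk-addons-021732 libvirt network seen in the DHCP-lease lines above.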
	I0610 10:22:44.230551   11511 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0610 10:22:44.230578   11511 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0610 10:22:44.397124   11511 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0610 10:22:44.397149   11511 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0610 10:22:44.587186   11511 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-021732" context rescaled to 1 replicas
	I0610 10:22:44.601643   11511 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0610 10:22:44.601677   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0610 10:22:44.707722   11511 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0610 10:22:44.707752   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0610 10:22:44.939893   11511 pod_ready.go:102] pod "coredns-7db6d8ff4d-jnxqr" in "kube-system" namespace has status "Ready":"False"
	I0610 10:22:44.968117   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0610 10:22:45.020032   11511 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0610 10:22:45.020064   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0610 10:22:45.334174   11511 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0610 10:22:45.334202   11511 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0610 10:22:45.479866   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0610 10:22:47.140850   11511 pod_ready.go:102] pod "coredns-7db6d8ff4d-jnxqr" in "kube-system" namespace has status "Ready":"False"
	I0610 10:22:47.478634   11511 pod_ready.go:92] pod "coredns-7db6d8ff4d-jnxqr" in "kube-system" namespace has status "Ready":"True"
	I0610 10:22:47.478654   11511 pod_ready.go:81] duration metric: took 4.545740316s for pod "coredns-7db6d8ff4d-jnxqr" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:47.478666   11511 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rx46l" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:47.525350   11511 pod_ready.go:92] pod "coredns-7db6d8ff4d-rx46l" in "kube-system" namespace has status "Ready":"True"
	I0610 10:22:47.525374   11511 pod_ready.go:81] duration metric: took 46.702228ms for pod "coredns-7db6d8ff4d-rx46l" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:47.525388   11511 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-021732" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:47.597922   11511 pod_ready.go:92] pod "etcd-addons-021732" in "kube-system" namespace has status "Ready":"True"
	I0610 10:22:47.597951   11511 pod_ready.go:81] duration metric: took 72.544019ms for pod "etcd-addons-021732" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:47.597962   11511 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-021732" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:47.690175   11511 pod_ready.go:92] pod "kube-apiserver-addons-021732" in "kube-system" namespace has status "Ready":"True"
	I0610 10:22:47.690196   11511 pod_ready.go:81] duration metric: took 92.228748ms for pod "kube-apiserver-addons-021732" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:47.690206   11511 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-021732" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:47.745062   11511 pod_ready.go:92] pod "kube-controller-manager-addons-021732" in "kube-system" namespace has status "Ready":"True"
	I0610 10:22:47.745089   11511 pod_ready.go:81] duration metric: took 54.875224ms for pod "kube-controller-manager-addons-021732" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:47.745102   11511 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7846w" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:47.876761   11511 pod_ready.go:92] pod "kube-proxy-7846w" in "kube-system" namespace has status "Ready":"True"
	I0610 10:22:47.876787   11511 pod_ready.go:81] duration metric: took 131.677995ms for pod "kube-proxy-7846w" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:47.876803   11511 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-021732" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:48.257894   11511 pod_ready.go:92] pod "kube-scheduler-addons-021732" in "kube-system" namespace has status "Ready":"True"
	I0610 10:22:48.257916   11511 pod_ready.go:81] duration metric: took 381.105399ms for pod "kube-scheduler-addons-021732" in "kube-system" namespace to be "Ready" ...
	I0610 10:22:48.257924   11511 pod_ready.go:38] duration metric: took 5.331331023s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 10:22:48.257938   11511 api_server.go:52] waiting for apiserver process to appear ...
	I0610 10:22:48.257997   11511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
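The pod_ready.go lines above log a simple poll loop: for each system-critical pod, the test waits up to 6m0s for the pod's Ready condition to become True. A minimal client-go sketch of that pattern is shown below; it is illustrative only, not minikube's pod_ready.go implementation, and waitPodReady plus the 2-second poll interval are assumptions made for the example.

// Minimal sketch of a "wait for pod Ready" poll loop using client-go.
// Illustrative only; not minikube's implementation.
package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the named pod until its Ready condition is True or the
// timeout expires, mirroring the 6m0s waits reported in the log above.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}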
	I0610 10:22:49.497202   11511 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0610 10:22:49.497244   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:49.500209   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:49.500597   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:49.500625   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:49.500789   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:49.501030   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:49.501204   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:49.501340   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:49.649840   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.754319111s)
	I0610 10:22:49.649895   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.688738286s)
	I0610 10:22:49.649965   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.649982   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.650010   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.656913321s)
	I0610 10:22:49.650029   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.600544568s)
	I0610 10:22:49.649963   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.672019554s)
	I0610 10:22:49.650058   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.649903   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.650078   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.650065   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.650090   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.650099   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.585663994s)
	I0610 10:22:49.650118   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.650145   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.650208   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.584042877s)
	I0610 10:22:49.650239   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.650257   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.650361   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.294650882s)
	I0610 10:22:49.650388   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.650079   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.650398   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.199585223s)
	I0610 10:22:49.650423   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.650437   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.650467   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.119986687s)
	I0610 10:22:49.650402   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.650484   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.650491   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.650045   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.650547   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.898959734s)
	I0610 10:22:49.650550   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.650561   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.650570   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.652395   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.652405   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.652414   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.652415   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.652425   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.652432   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.652483   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.652489   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.652497   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.652697   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.652714   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.652719   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.652724   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.652500   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.652734   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.652746   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.652754   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.652755   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.652755   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.652804   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.652812   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.652535   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.655031   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.655047   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.655055   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.652537   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.655111   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.655126   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.652553   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.652552   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.652569   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.652587   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.652575   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.655244   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.655254   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.655262   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.652605   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.655303   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.655313   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.652637   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.655362   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.655380   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.655391   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.652656   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.652676   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.655407   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.655416   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.655427   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.652519   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.654588   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.654605   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.654608   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.655641   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.655663   11511 addons.go:475] Verifying addon registry=true in "addons-021732"
	I0610 10:22:49.654631   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.654628   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.654648   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.654964   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.654995   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.655133   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.655322   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.652620   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.655350   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.655998   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.656016   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.656096   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.656104   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.656108   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.656996   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.658143   11511 out.go:177] * Verifying registry addon...
	I0610 10:22:49.658168   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.658189   11511 addons.go:475] Verifying addon metrics-server=true in "addons-021732"
	I0610 10:22:49.658191   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.658190   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.658209   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.658249   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.658250   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.659647   11511 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-021732 service yakd-dashboard -n yakd-dashboard
	
	I0610 10:22:49.658161   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.661130   11511 addons.go:475] Verifying addon ingress=true in "addons-021732"
	I0610 10:22:49.658198   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.661164   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.661179   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.662567   11511 out.go:177] * Verifying ingress addon...
	I0610 10:22:49.658409   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.658538   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.658589   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.658404   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.661410   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.661469   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.661864   11511 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0610 10:22:49.664128   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.665092   11511 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0610 10:22:49.665926   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.665954   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.701091   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.701111   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.701357   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.701378   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.701412   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	W0610 10:22:49.701482   11511 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
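The 'storage-provisioner-rancher' failure above is an optimistic-concurrency conflict: the storageclasses.storage.k8s.io object changed between the addon's read and its update, so the API server rejected the stale write. As a minimal sketch (not what the minikube addon code does internally), the same default-class marking can be retried by hand with a merge patch; a patch does not carry a resourceVersion, so it does not trip the "object has been modified" check the way a full update can. The class name local-path and the annotation key are taken from the error and from standard Kubernetes conventions:

    # Apply the default-class annotation via a merge patch against the live object.
    kubectl patch storageclass local-path \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

    # Confirm which class is now marked (default).
    kubectl get storageclass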
	I0610 10:22:49.705302   11511 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0610 10:22:49.705327   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:49.707126   11511 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0610 10:22:49.707145   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
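The kapi.go loop above keeps re-listing pods by label until they leave Pending. Outside the test harness, roughly the same readiness gate can be expressed with kubectl wait, reusing the label selectors and namespaces shown in the log; the 6m timeout is an illustrative value, not one taken from the harness, and narrowing ingress-nginx to the controller component avoids waiting on its one-shot admission job pods:

    # Registry addon pods live in kube-system; the ingress controller in ingress-nginx.
    kubectl -n kube-system wait --for=condition=Ready pod \
      -l kubernetes.io/minikube-addons=registry --timeout=6m
    kubectl -n ingress-nginx wait --for=condition=Ready pod \
      -l app.kubernetes.io/component=controller --timeout=6m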
	I0610 10:22:49.718340   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:49.718360   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:49.718675   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:49.718710   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:49.718727   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:49.928096   11511 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0610 10:22:50.007454   11511 addons.go:234] Setting addon gcp-auth=true in "addons-021732"
	I0610 10:22:50.007519   11511 host.go:66] Checking if "addons-021732" exists ...
	I0610 10:22:50.007814   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:50.007850   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:50.022912   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43173
	I0610 10:22:50.023391   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:50.023860   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:50.023888   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:50.024224   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:50.024736   11511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:22:50.024766   11511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:22:50.040973   11511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38147
	I0610 10:22:50.041424   11511 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:22:50.041889   11511 main.go:141] libmachine: Using API Version  1
	I0610 10:22:50.041913   11511 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:22:50.042278   11511 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:22:50.042481   11511 main.go:141] libmachine: (addons-021732) Calling .GetState
	I0610 10:22:50.044253   11511 main.go:141] libmachine: (addons-021732) Calling .DriverName
	I0610 10:22:50.044481   11511 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0610 10:22:50.044507   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHHostname
	I0610 10:22:50.047170   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:50.047615   11511 main.go:141] libmachine: (addons-021732) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:72:ae", ip: ""} in network mk-addons-021732: {Iface:virbr1 ExpiryTime:2024-06-10 11:22:03 +0000 UTC Type:0 Mac:52:54:00:70:72:ae Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:addons-021732 Clientid:01:52:54:00:70:72:ae}
	I0610 10:22:50.047646   11511 main.go:141] libmachine: (addons-021732) DBG | domain addons-021732 has defined IP address 192.168.39.244 and MAC address 52:54:00:70:72:ae in network mk-addons-021732
	I0610 10:22:50.047779   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHPort
	I0610 10:22:50.047977   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHKeyPath
	I0610 10:22:50.048144   11511 main.go:141] libmachine: (addons-021732) Calling .GetSSHUsername
	I0610 10:22:50.048315   11511 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/addons-021732/id_rsa Username:docker}
	I0610 10:22:50.257224   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:50.257352   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:50.368881   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.40072667s)
	I0610 10:22:50.368938   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:50.368969   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:50.368968   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.491960707s)
	W0610 10:22:50.369008   11511 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0610 10:22:50.369037   11511 retry.go:31] will retry after 331.98459ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0610 10:22:50.369315   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:50.369336   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:50.369346   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:50.369355   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:50.369582   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:50.369588   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:50.369600   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:50.673908   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:50.674124   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:50.702062   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
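The failed apply above is an ordering problem: the VolumeSnapshotClass manifest was submitted in the same apply as the CRDs that define it, and the CRDs were not yet established, hence "ensure CRDs are installed first". The harness simply re-applies (here with --force). An alternative sketch that avoids the race is to apply the CRDs on their own and wait for them to be established before creating any VolumeSnapshotClass; the file paths are the ones from the log, while the explicit wait step is an assumption and not something minikube runs:

    # Install the snapshot CRDs first and block until the API server has established them.
    kubectl apply \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io \
      crd/volumesnapshotcontents.snapshot.storage.k8s.io \
      crd/volumesnapshots.snapshot.storage.k8s.io

    # Only now apply the custom resource that depends on those CRDs.
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml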
	I0610 10:22:51.171998   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:51.172513   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:51.676216   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:51.682391   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:52.177551   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:52.191357   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:52.369412   11511 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.111391758s)
	I0610 10:22:52.369445   11511 api_server.go:72] duration metric: took 10.120029627s to wait for apiserver process to appear ...
	I0610 10:22:52.369453   11511 api_server.go:88] waiting for apiserver healthz status ...
	I0610 10:22:52.369420   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.889510306s)
	I0610 10:22:52.369471   11511 api_server.go:253] Checking apiserver healthz at https://192.168.39.244:8443/healthz ...
	I0610 10:22:52.369484   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:52.369500   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:52.369483   11511 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.324980008s)
	I0610 10:22:52.371588   11511 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0610 10:22:52.369785   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:52.369803   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:52.372751   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:52.372765   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:52.374034   11511 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0610 10:22:52.372776   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:52.375293   11511 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0610 10:22:52.375304   11511 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0610 10:22:52.375531   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:52.375545   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:52.375555   11511 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-021732"
	I0610 10:22:52.376861   11511 out.go:177] * Verifying csi-hostpath-driver addon...
	I0610 10:22:52.375533   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:52.378083   11511 api_server.go:279] https://192.168.39.244:8443/healthz returned 200:
	ok
	I0610 10:22:52.378721   11511 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0610 10:22:52.380376   11511 api_server.go:141] control plane version: v1.30.1
	I0610 10:22:52.380394   11511 api_server.go:131] duration metric: took 10.936438ms to wait for apiserver health ...
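The healthz probe above is a plain HTTPS GET against the apiserver endpoint from the log. The same check can be reproduced by hand, assuming anonymous access to /healthz has not been disabled (the default system:public-info-viewer role allows it); -k skips certificate verification on this throwaway test VM, otherwise pass the cluster CA via --cacert:

    # Expect the literal body "ok" with HTTP 200 when the control plane is healthy.
    curl -k https://192.168.39.244:8443/healthz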
	I0610 10:22:52.380401   11511 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 10:22:52.390060   11511 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0610 10:22:52.390080   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:52.398694   11511 system_pods.go:59] 18 kube-system pods found
	I0610 10:22:52.398728   11511 system_pods.go:61] "coredns-7db6d8ff4d-jnxqr" [698b6a09-55b9-4a70-8733-9c95667a8f2d] Running
	I0610 10:22:52.398735   11511 system_pods.go:61] "coredns-7db6d8ff4d-rx46l" [8198dacc-399a-413f-ba9c-1721544a3b9a] Running
	I0610 10:22:52.398745   11511 system_pods.go:61] "csi-hostpath-attacher-0" [d1cf1ab9-7f35-4dd9-aa47-9bc40f3875ad] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0610 10:22:52.398754   11511 system_pods.go:61] "csi-hostpathplugin-9gl88" [9285d121-5350-4eb2-a327-bafaf090e4d9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0610 10:22:52.398762   11511 system_pods.go:61] "etcd-addons-021732" [07cfab01-cdff-4d4b-bf7f-aec5026381cb] Running
	I0610 10:22:52.398766   11511 system_pods.go:61] "kube-apiserver-addons-021732" [f3743640-5f88-4d65-a5d6-178669bc90b9] Running
	I0610 10:22:52.398770   11511 system_pods.go:61] "kube-controller-manager-addons-021732" [d02e8430-937d-4dc5-acf6-d07ee42cdfc3] Running
	I0610 10:22:52.398775   11511 system_pods.go:61] "kube-ingress-dns-minikube" [3e396de4-1f67-49cc-8b15-180ef259e715] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0610 10:22:52.398782   11511 system_pods.go:61] "kube-proxy-7846w" [49d2baed-2c3e-4858-8479-918a31ae3835] Running
	I0610 10:22:52.398791   11511 system_pods.go:61] "kube-scheduler-addons-021732" [20886653-83d8-4491-9a79-a417565db2b5] Running
	I0610 10:22:52.398799   11511 system_pods.go:61] "metrics-server-c59844bb4-5lbmz" [9560fdac-7849-4123-9b3f-b4042539052c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 10:22:52.398805   11511 system_pods.go:61] "nvidia-device-plugin-daemonset-2zf77" [6e61695c-8992-480f-826d-23a9f83617e8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0610 10:22:52.398814   11511 system_pods.go:61] "registry-proxy-lq94h" [4b7b9e8d-e9e9-450e-877e-156e3a37a859] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0610 10:22:52.398824   11511 system_pods.go:61] "registry-xmm5t" [50b19bb8-aabd-4c89-a304-877505b561a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0610 10:22:52.398837   11511 system_pods.go:61] "snapshot-controller-745499f584-8f7kt" [ed854b08-22d6-4798-b11a-1966d9e683c3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0610 10:22:52.398849   11511 system_pods.go:61] "snapshot-controller-745499f584-qbgdf" [928f0775-d90f-49b2-9b80-727b1dfaca99] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0610 10:22:52.398859   11511 system_pods.go:61] "storage-provisioner" [93dd7c04-05d2-42a7-9762-bdb57fa30867] Running
	I0610 10:22:52.398867   11511 system_pods.go:61] "tiller-deploy-6677d64bcd-86c76" [3257a893-b201-4088-be48-fb02698a0350] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0610 10:22:52.398878   11511 system_pods.go:74] duration metric: took 18.470489ms to wait for pod list to return data ...
	I0610 10:22:52.398887   11511 default_sa.go:34] waiting for default service account to be created ...
	I0610 10:22:52.407980   11511 default_sa.go:45] found service account: "default"
	I0610 10:22:52.408002   11511 default_sa.go:55] duration metric: took 9.106206ms for default service account to be created ...
	I0610 10:22:52.408011   11511 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 10:22:52.434757   11511 system_pods.go:86] 19 kube-system pods found
	I0610 10:22:52.434784   11511 system_pods.go:89] "coredns-7db6d8ff4d-jnxqr" [698b6a09-55b9-4a70-8733-9c95667a8f2d] Running
	I0610 10:22:52.434789   11511 system_pods.go:89] "coredns-7db6d8ff4d-rx46l" [8198dacc-399a-413f-ba9c-1721544a3b9a] Running
	I0610 10:22:52.434796   11511 system_pods.go:89] "csi-hostpath-attacher-0" [d1cf1ab9-7f35-4dd9-aa47-9bc40f3875ad] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0610 10:22:52.434802   11511 system_pods.go:89] "csi-hostpath-resizer-0" [d68124c2-14de-475d-92db-90cc6cef8080] Pending
	I0610 10:22:52.434813   11511 system_pods.go:89] "csi-hostpathplugin-9gl88" [9285d121-5350-4eb2-a327-bafaf090e4d9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0610 10:22:52.434818   11511 system_pods.go:89] "etcd-addons-021732" [07cfab01-cdff-4d4b-bf7f-aec5026381cb] Running
	I0610 10:22:52.434823   11511 system_pods.go:89] "kube-apiserver-addons-021732" [f3743640-5f88-4d65-a5d6-178669bc90b9] Running
	I0610 10:22:52.434827   11511 system_pods.go:89] "kube-controller-manager-addons-021732" [d02e8430-937d-4dc5-acf6-d07ee42cdfc3] Running
	I0610 10:22:52.434833   11511 system_pods.go:89] "kube-ingress-dns-minikube" [3e396de4-1f67-49cc-8b15-180ef259e715] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0610 10:22:52.434838   11511 system_pods.go:89] "kube-proxy-7846w" [49d2baed-2c3e-4858-8479-918a31ae3835] Running
	I0610 10:22:52.434843   11511 system_pods.go:89] "kube-scheduler-addons-021732" [20886653-83d8-4491-9a79-a417565db2b5] Running
	I0610 10:22:52.434851   11511 system_pods.go:89] "metrics-server-c59844bb4-5lbmz" [9560fdac-7849-4123-9b3f-b4042539052c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 10:22:52.434858   11511 system_pods.go:89] "nvidia-device-plugin-daemonset-2zf77" [6e61695c-8992-480f-826d-23a9f83617e8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0610 10:22:52.434867   11511 system_pods.go:89] "registry-proxy-lq94h" [4b7b9e8d-e9e9-450e-877e-156e3a37a859] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0610 10:22:52.434873   11511 system_pods.go:89] "registry-xmm5t" [50b19bb8-aabd-4c89-a304-877505b561a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0610 10:22:52.434881   11511 system_pods.go:89] "snapshot-controller-745499f584-8f7kt" [ed854b08-22d6-4798-b11a-1966d9e683c3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0610 10:22:52.434887   11511 system_pods.go:89] "snapshot-controller-745499f584-qbgdf" [928f0775-d90f-49b2-9b80-727b1dfaca99] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0610 10:22:52.434894   11511 system_pods.go:89] "storage-provisioner" [93dd7c04-05d2-42a7-9762-bdb57fa30867] Running
	I0610 10:22:52.434900   11511 system_pods.go:89] "tiller-deploy-6677d64bcd-86c76" [3257a893-b201-4088-be48-fb02698a0350] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0610 10:22:52.434909   11511 system_pods.go:126] duration metric: took 26.892692ms to wait for k8s-apps to be running ...
	I0610 10:22:52.434916   11511 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 10:22:52.434957   11511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:22:52.463106   11511 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0610 10:22:52.463144   11511 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0610 10:22:52.550984   11511 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0610 10:22:52.551014   11511 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0610 10:22:52.659044   11511 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0610 10:22:52.672523   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:52.672580   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:52.814331   11511 system_svc.go:56] duration metric: took 379.407198ms WaitForService to wait for kubelet
	I0610 10:22:52.814360   11511 kubeadm.go:576] duration metric: took 10.564945233s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:22:52.814378   11511 node_conditions.go:102] verifying NodePressure condition ...
	I0610 10:22:52.814540   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.11242968s)
	I0610 10:22:52.814577   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:52.814594   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:52.814874   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:52.814901   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:52.814910   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:52.814918   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:52.815114   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:52.815131   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:52.817674   11511 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 10:22:52.817698   11511 node_conditions.go:123] node cpu capacity is 2
	I0610 10:22:52.817708   11511 node_conditions.go:105] duration metric: took 3.32605ms to run NodePressure ...
	I0610 10:22:52.817719   11511 start.go:240] waiting for startup goroutines ...
	I0610 10:22:52.884396   11511 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0610 10:22:52.884418   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:53.174185   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:53.177498   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:53.389443   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:53.677832   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:53.687884   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:53.898077   11511 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.238991459s)
	I0610 10:22:53.898127   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:53.898142   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:53.898456   11511 main.go:141] libmachine: (addons-021732) DBG | Closing plugin on server side
	I0610 10:22:53.898475   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:53.898530   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:53.898545   11511 main.go:141] libmachine: Making call to close driver server
	I0610 10:22:53.898545   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:53.898556   11511 main.go:141] libmachine: (addons-021732) Calling .Close
	I0610 10:22:53.898758   11511 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:22:53.898775   11511 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:22:53.900610   11511 addons.go:475] Verifying addon gcp-auth=true in "addons-021732"
	I0610 10:22:53.902522   11511 out.go:177] * Verifying gcp-auth addon...
	I0610 10:22:53.904583   11511 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0610 10:22:53.918273   11511 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0610 10:22:53.918292   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:22:54.171654   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:54.171841   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:54.384432   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:54.408148   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:22:54.672485   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:54.672968   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:54.884581   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:54.907790   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:22:55.172557   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:55.172559   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:55.384775   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:55.407765   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:22:55.671746   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:55.672170   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:55.883931   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:55.907597   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:22:56.171561   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:56.171566   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:56.383752   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:56.407766   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:22:56.672150   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:56.672337   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:56.884198   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:56.908542   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:22:57.170838   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:57.171440   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:57.384051   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:57.407831   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:22:57.671791   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:57.672548   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:57.886612   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:57.908656   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:22:58.175898   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:58.190708   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:58.383802   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:58.411405   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:22:58.674273   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:58.675914   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:58.888036   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:58.908673   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:22:59.172833   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:59.173390   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:59.384366   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:59.408696   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:22:59.837161   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:22:59.841029   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:22:59.884889   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:22:59.908456   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:00.172198   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:00.172267   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:00.384633   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:00.409379   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:00.671502   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:00.672203   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:00.883750   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:00.909001   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:01.170972   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:01.170999   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:01.383457   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:01.409374   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:01.672489   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:01.672713   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:01.885372   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:01.908109   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:02.171811   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:02.171959   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:02.384745   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:02.408990   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:02.673193   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:02.675022   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:02.884717   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:02.909609   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:03.172151   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:03.172796   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:03.384432   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:03.408343   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:03.672316   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:03.673023   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:03.885523   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:03.909004   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:04.170985   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:04.173781   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:04.384467   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:04.408445   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:04.672149   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:04.672836   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:05.223028   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:05.223363   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:05.225389   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:05.227587   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:05.385246   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:05.408661   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:05.671038   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:05.671390   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:05.883795   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:05.908409   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:06.172173   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:06.172888   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:06.383705   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:06.408219   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:06.671221   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:06.671398   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:06.883856   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:06.908105   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:07.170377   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:07.170763   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:07.384458   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:07.408130   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:07.998014   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:07.998469   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:08.001596   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:08.002676   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:08.172127   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:08.174672   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:08.383359   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:08.408459   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:08.671454   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:08.671740   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:08.884590   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:08.907648   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:09.171174   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:09.173514   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:09.384631   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:09.407985   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:09.671397   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:09.671501   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:09.883982   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:09.909069   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:10.172729   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:10.173212   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:10.383979   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:10.408303   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:10.670979   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:10.671082   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:10.884147   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:10.908430   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:11.171590   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:11.172078   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:11.385004   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:11.407949   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:11.672754   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:11.673011   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:11.884847   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:11.908261   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:12.171143   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:12.171199   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:12.384583   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:12.408073   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:12.670620   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:12.670976   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:12.884174   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:12.908857   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:13.171436   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:13.171589   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:13.384247   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:13.894705   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:13.896741   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:13.896864   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:13.897162   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:13.913517   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:14.171270   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:14.171434   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:14.384241   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:14.408812   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:14.670031   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:14.670322   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:14.886371   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:14.916040   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:15.170285   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:15.173439   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:15.385153   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:15.408503   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:15.670745   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:15.671496   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:15.885613   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:15.908755   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:16.171608   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:16.172554   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:16.384535   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:16.407752   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:16.671453   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:16.671784   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:16.884422   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:16.908588   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:17.172138   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:17.172345   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:17.384452   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:17.408041   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:17.671187   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:17.672309   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:17.884669   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:17.907587   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:18.171326   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:18.172565   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:18.384800   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:18.408280   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:18.670696   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:18.670936   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:18.884513   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:18.907857   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:19.171373   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:19.172365   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:19.384661   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:19.410273   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:19.671510   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:19.671619   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:19.890663   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:19.907781   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:20.171191   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:20.173470   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:20.388113   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:20.408233   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:20.671363   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:20.671633   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:20.883853   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:20.909039   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:21.170480   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:21.170732   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:21.384885   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:21.407592   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:21.670904   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:21.671200   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:21.889410   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:21.908422   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:22.171912   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:22.172778   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:22.384170   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:22.408341   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:22.672003   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:22.672879   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:22.884276   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:22.908911   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:23.171608   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:23.171641   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:23.383758   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:23.408313   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:23.670989   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:23.671081   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:23.886101   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:23.907558   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:24.170876   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:24.171471   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:24.384464   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:24.408109   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:24.672684   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:24.673254   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:24.886841   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:24.908191   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:25.170027   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:25.170907   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:25.386280   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:25.408658   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:25.670846   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:25.671150   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:25.883727   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:25.908012   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:26.169901   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:26.171043   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:26.384262   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:26.408310   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:26.670374   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:26.670601   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:26.884534   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:26.914248   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:27.170024   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:27.171510   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:27.396067   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:27.408189   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:27.671692   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:27.671829   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:27.884223   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:27.908714   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:28.173702   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:28.175116   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:28.384243   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:28.409720   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:28.671275   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:28.671372   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:28.883880   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:28.908519   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:29.171484   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:29.171989   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:29.384166   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:29.408582   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:29.670221   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:29.671091   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:29.884096   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:29.910267   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:30.170287   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:30.170815   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:30.384201   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:30.408652   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:30.671017   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:30.671094   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:30.884318   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:30.908414   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:31.172996   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:31.174167   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:31.383897   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:31.408524   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:31.671355   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:31.671366   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:31.886649   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:31.915198   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:32.171683   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:32.171947   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:32.384216   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:32.408661   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:32.671008   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:32.672250   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:32.884518   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:32.908827   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:33.172377   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:33.172615   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:33.384900   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:33.408007   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:33.671417   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:33.671843   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:33.884659   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:33.908559   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:34.172084   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:34.172771   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:34.384308   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:34.408261   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:34.673076   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:34.673573   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:34.883640   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:34.907832   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:35.171975   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:35.172116   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:35.385551   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:35.409100   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:35.772417   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:35.772661   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:35.884404   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:35.909009   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:36.172106   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:36.172496   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:36.384238   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:36.408236   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:36.671055   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:36.671343   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:36.883982   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:36.908073   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:37.171393   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:37.172618   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:37.384357   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:37.408329   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:37.671982   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:37.672571   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:37.886299   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:37.908464   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:38.170652   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:38.170958   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0610 10:23:38.384842   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:38.408209   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:38.670114   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:38.671362   11511 kapi.go:107] duration metric: took 49.009497245s to wait for kubernetes.io/minikube-addons=registry ...
	I0610 10:23:38.884147   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:38.908317   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:39.170356   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:39.385541   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:39.408214   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:39.671234   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:39.883626   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:39.908317   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:40.170122   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:40.385156   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:40.407762   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:40.671079   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:40.925091   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:40.927065   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:41.170456   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:41.384814   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:41.407788   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:41.671879   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:42.209673   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:42.211757   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:42.212047   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:42.384288   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:42.408320   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:42.670782   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:42.883866   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:42.908293   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:43.170224   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:43.384967   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:43.408752   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:43.671107   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:43.885075   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:43.908530   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:44.171198   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:44.384761   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:44.408694   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:44.670036   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:44.884652   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:44.908840   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:45.171029   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:45.384024   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:45.408553   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:45.670798   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:45.890236   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:45.908781   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:46.170468   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:46.384005   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:46.408524   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:46.670289   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:46.883910   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:46.908571   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:47.170882   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:47.384031   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:47.408356   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:47.670041   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:47.887057   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:47.915200   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:48.169889   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:48.383997   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:48.408530   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:48.670012   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:48.884722   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:48.908146   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:49.170435   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:49.384228   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:49.409027   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:49.670446   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:49.883850   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:49.908516   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:50.171093   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:50.384298   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:50.408235   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:50.670393   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:50.887356   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:50.909859   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:51.487319   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:51.488110   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:51.489102   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:51.670096   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:51.884902   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:51.908010   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:52.169725   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:52.383715   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:52.407755   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:52.670039   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:52.884494   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:52.908283   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:53.172224   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:53.387937   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:53.411197   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:53.672613   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:53.888105   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:53.910988   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:54.169645   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:54.384731   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:54.409849   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:54.670829   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:54.885134   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:54.911376   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:55.170629   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:55.389245   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:55.407975   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:55.670903   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:55.884651   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:55.908573   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:56.171546   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:56.384694   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:56.408887   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:56.670549   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:56.883824   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:56.908001   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:57.170810   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:57.384755   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:57.408041   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:57.670241   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:57.889299   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:57.908776   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:58.171046   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:58.669636   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:58.670787   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:58.673486   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:58.885765   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:58.909582   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:59.171814   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:59.383849   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:59.408089   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:23:59.670091   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:23:59.884376   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:23:59.908753   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:00.170700   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:24:00.383760   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:00.408341   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:00.670303   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:24:00.885988   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:00.909215   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:01.170203   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:24:01.384917   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:01.408488   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:01.670794   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:24:01.883865   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:01.908066   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:02.169900   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:24:02.384380   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:02.408823   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:02.677775   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:24:02.884267   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:02.908231   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:03.170556   11511 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0610 10:24:03.384547   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:03.408070   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:03.670388   11511 kapi.go:107] duration metric: took 1m14.005293415s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0610 10:24:03.886645   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:03.908262   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:04.385168   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:04.408853   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:04.891800   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:04.909627   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:05.383956   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:05.409019   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:05.884077   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:05.908308   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:06.384596   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:06.408649   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:06.885923   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:06.908734   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:07.384570   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:07.407939   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:07.884646   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:07.907860   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0610 10:24:08.383697   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:08.408439   11511 kapi.go:107] duration metric: took 1m14.503855428s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0610 10:24:08.410827   11511 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-021732 cluster.
	I0610 10:24:08.412379   11511 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0610 10:24:08.413802   11511 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
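	For illustration only (not part of the recorded run): the `gcp-auth-skip-secret` label mentioned in the messages above must be present on a pod at creation time. A minimal sketch, assuming a throwaway pod name, image, and label value (the log only confirms the label key):

	  # hypothetical pod that opts out of GCP credential injection;
	  # the name, image, and label value are placeholders
	  kubectl --context addons-021732 run skip-gcp-auth-demo \
	    --image=busybox:1.36 \
	    --labels=gcp-auth-skip-secret=true \
	    --restart=Never \
	    -- sleep 3600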
	I0610 10:24:08.884881   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:09.385034   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:09.886134   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:10.385463   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:10.886586   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:11.395962   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:11.884316   11511 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0610 10:24:12.384732   11511 kapi.go:107] duration metric: took 1m20.006008s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0610 10:24:12.386442   11511 out.go:177] * Enabled addons: helm-tiller, metrics-server, nvidia-device-plugin, storage-provisioner, yakd, cloud-spanner, ingress-dns, default-storageclass, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0610 10:24:12.387697   11511 addons.go:510] duration metric: took 1m30.138256587s for enable addons: enabled=[helm-tiller metrics-server nvidia-device-plugin storage-provisioner yakd cloud-spanner ingress-dns default-storageclass inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
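	For reference, a hedged way to re-check which addons ended up enabled for this profile (assuming a minikube binary on PATH; the CI run invokes its own built binary):

	  # list addon status for the addons-021732 profile
	  minikube -p addons-021732 addons list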
	I0610 10:24:12.387737   11511 start.go:245] waiting for cluster config update ...
	I0610 10:24:12.387754   11511 start.go:254] writing updated cluster config ...
	I0610 10:24:12.387998   11511 ssh_runner.go:195] Run: rm -f paused
	I0610 10:24:12.441934   11511 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 10:24:12.443663   11511 out.go:177] * Done! kubectl is now configured to use "addons-021732" cluster and "default" namespace by default
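	A quick, hypothetical sanity check of the kubeconfig state reported above (commands not taken from the run):

	  # the current context should point at the addons-021732 cluster
	  kubectl config current-context
	  # list pods in the default namespace via that context
	  kubectl --context addons-021732 get pods -n default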
	
	
	==> CRI-O <==
	Jun 10 10:30:24 addons-021732 crio[678]: time="2024-06-10 10:30:24.267124877Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ee0fcec-6943-47f4-9b75-4284ad942830 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:30:24 addons-021732 crio[678]: time="2024-06-10 10:30:24.267655996Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc22745643d6853b725260aed1b923e4584d8b14d0021f8f9b42a046e6c006fe,PodSandboxId:7e193706ef9096110b87737cbf61070b4684f0d86473e3a97d0d532143683b26,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1718015221412470525,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-d88fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01572e27-a714-4633-aeea-7e662365ce75,},Annotations:map[string]string{io.kubernetes.container.hash: afd70b2c,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac27835e9bc37e01e901f96ca22c17fd5d02c7d3cc7abe3fb4ed6575a85ef8b,PodSandboxId:eff348790a47f8fccfe3d62e61d16d70653ec33b3f6cf8419aa3b33179bdeda1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1718015081034721462,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8105de8a-be57-47d3-ade8-89321c7029b7,},Annotations:map[string]string{io.kubern
etes.container.hash: 73535256,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5faa2d32e1d9b0154c320a5ace8ef9295cb40018f53b9a1bc29ea84f16ddc2b,PodSandboxId:b5f07ed2ec364ee9893a3550df2d612fed6c86ed923e4a81d732270590f4d9e0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2cfebb9f82f21165fc736638311c2d6b6961fa0226a8164a753cbb589f6b1e43,State:CONTAINER_RUNNING,CreatedAt:1718015059655946675,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7fc69f7444-b726p,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 53f367ca-294c-4305-b2f4-54c5bb185ad9,},Annotations:map[string]string{io.kubernetes.container.hash: 213be43e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a040b0631871c5631fa7c1e5e37c49b6b4f9b576d1bbfe02db04511ebf3231a,PodSandboxId:64b5dd5d40e45f5aa8acbda35a4ed96ef9b876b7b5286e0ad969e9fee9290dd5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1718015047140617852,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-p48fw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c0f8acd7-1aba-434e-9c69-1e2108046b61,},Annotations:map[string]string{io.kubernetes.container.hash: 5cdb680c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d19d323b52af89b5b92bab3b6f19c893aa65fe3177a46cf1454bd513381522b7,PodSandboxId:21444c38a2d27266b67340bde858e6ca2cd849b2b108ea0c7958a7e96447a333,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:17180
15023423319032,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-p8pv2,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: c0ef4698-bf75-4680-bcfa-95167d27a615,},Annotations:map[string]string{io.kubernetes.container.hash: 282b5fcd,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:949c1eb00eb5e8487e589e8300238d291cfc98df4afb881fd561cf758cc78ef6,PodSandboxId:dae785f5d0a0f34d4612019df92ecf91213ba4898a357a7b65a2b10fc4b41d98,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1718015008088664561,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-68cv5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: fcfe5ad1-9315-4ca6-acfe-1a989c307a55,},Annotations:map[string]string{io.kubernetes.container.hash: f797413,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f53b89d046de317315a4195871d181a2ce396fd05e111ab9650e4efb84b51608,PodSandboxId:b1735eeeb605452e888eb5401196ed44b99504553e895e81479b30aa570a7a78,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412
e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1718015001717463769,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5lbmz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9560fdac-7849-4123-9b3f-b4042539052c,},Annotations:map[string]string{io.kubernetes.container.hash: 27f580fc,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80d4218e2abaf26e52a0b15b1daec5f8d45a248f3c62521a5bd620e6cb39ac51,PodSandboxId:7c251429316808727435b4d9092a1cb11bf9f9a0bb64787ad073f709a6c94386,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718014968932542999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93dd7c04-05d2-42a7-9762-bdb57fa30867,},Annotations:map[string]string{io.kubernetes.container.hash: 12e2039,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d51f47f6cffd10ed84592ac370dda69205489c5b11d84b22f2bb4811e54fb4,PodSandboxId:a6bb9746ad3545c7b750d9aa7b2d1480c282ab769205f5af0f084f92aa3f85af,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Imag
e:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718014965442219523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rx46l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8198dacc-399a-413f-ba9c-1721544a3b9a,},Annotations:map[string]string{io.kubernetes.container.hash: 612745aa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8854f
803d622f6fd0c7bc120aed1cbbe06fc982cea0d1ba840b2ce765d2bbb8a,PodSandboxId:c243608ad14ca90465a6848bb87ab08e6cb01492a5045785e4f1a25a90e05e25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718014963381522056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49d2baed-2c3e-4858-8479-918a31ae3835,},Annotations:map[string]string{io.kubernetes.container.hash: d55409fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88fdec6d7662e6f142e2c4782941d1b014d72
5747ad82975d2a3af2d75fbbac,PodSandboxId:61b23db931e05e46b90fe420f2edfdd903899b6855c49a060161dc9cefe5fb00,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718014943395397339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1774aac21a5451245d407877bf5c9b1f,},Annotations:map[string]string{io.kubernetes.container.hash: 9ae77e1c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f0d847f6ad591ffc2d8685f17d719d307927d57b03dac385bc80de1cd722f69,PodSandboxId:930d4bd
f4e6e5c97e64cb524f36fbfd135a3ef984f46eafc10705f3540a5d4cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718014943341283465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88f133aeec1950f817d39a425134e254,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1038be0f6076f6cad62c595b27f3cfd98459c8cb35b6a6e90c6b673fad8e174,PodSandboxId:a40d1bbcc2adfbe1ac233ca4
ad30f4a34b6db12b8adb16beda8e5b77f887f4b5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718014943351427312,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 672548f328c46b786476290618e6a09f,},Annotations:map[string]string{io.kubernetes.container.hash: 70606478,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e4c6832ab8029c82de1b8e68e8894ee49e06552c2cb431ccc85768db866a227,PodSandboxId:850f971a165dae1a6d3908d49d28dbc88bfbdec1b
d4c5b831b0a5c02a4c4a360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718014943335262502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e8d90f3cb5861300be12c4a927a655,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ee0fcec-6943-47f4-9b75-4284ad942830 name=/runtime.v1.RuntimeService/ListC
ontainers
	Jun 10 10:30:24 addons-021732 crio[678]: time="2024-06-10 10:30:24.288213771Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=1fba850b-cd32-48cf-bbc6-0e33b8710345 name=/runtime.v1.RuntimeService/Status
	Jun 10 10:30:24 addons-021732 crio[678]: time="2024-06-10 10:30:24.288291684Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=1fba850b-cd32-48cf-bbc6-0e33b8710345 name=/runtime.v1.RuntimeService/Status
	Jun 10 10:30:24 addons-021732 crio[678]: time="2024-06-10 10:30:24.303954957Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1cee892c-0be4-40b0-91f6-d48785a2b772 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:30:24 addons-021732 crio[678]: time="2024-06-10 10:30:24.304028100Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1cee892c-0be4-40b0-91f6-d48785a2b772 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:30:24 addons-021732 crio[678]: time="2024-06-10 10:30:24.305726169Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5809a7ff-4ce5-4edc-a768-5edbcdd73cbc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:30:24 addons-021732 crio[678]: time="2024-06-10 10:30:24.307148829Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718015424307117670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584737,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5809a7ff-4ce5-4edc-a768-5edbcdd73cbc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:30:24 addons-021732 crio[678]: time="2024-06-10 10:30:24.307854761Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be7e18bd-d680-4bd9-853b-5cc8d9adb5d1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:30:24 addons-021732 crio[678]: time="2024-06-10 10:30:24.307908802Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be7e18bd-d680-4bd9-853b-5cc8d9adb5d1 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:30:24 addons-021732 crio[678]: time="2024-06-10 10:30:24.308250047Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc22745643d6853b725260aed1b923e4584d8b14d0021f8f9b42a046e6c006fe,PodSandboxId:7e193706ef9096110b87737cbf61070b4684f0d86473e3a97d0d532143683b26,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1718015221412470525,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-d88fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01572e27-a714-4633-aeea-7e662365ce75,},Annotations:map[string]string{io.kubernetes.container.hash: afd70b2c,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac27835e9bc37e01e901f96ca22c17fd5d02c7d3cc7abe3fb4ed6575a85ef8b,PodSandboxId:eff348790a47f8fccfe3d62e61d16d70653ec33b3f6cf8419aa3b33179bdeda1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1718015081034721462,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8105de8a-be57-47d3-ade8-89321c7029b7,},Annotations:map[string]string{io.kubern
etes.container.hash: 73535256,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5faa2d32e1d9b0154c320a5ace8ef9295cb40018f53b9a1bc29ea84f16ddc2b,PodSandboxId:b5f07ed2ec364ee9893a3550df2d612fed6c86ed923e4a81d732270590f4d9e0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2cfebb9f82f21165fc736638311c2d6b6961fa0226a8164a753cbb589f6b1e43,State:CONTAINER_RUNNING,CreatedAt:1718015059655946675,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7fc69f7444-b726p,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 53f367ca-294c-4305-b2f4-54c5bb185ad9,},Annotations:map[string]string{io.kubernetes.container.hash: 213be43e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a040b0631871c5631fa7c1e5e37c49b6b4f9b576d1bbfe02db04511ebf3231a,PodSandboxId:64b5dd5d40e45f5aa8acbda35a4ed96ef9b876b7b5286e0ad969e9fee9290dd5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1718015047140617852,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-p48fw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c0f8acd7-1aba-434e-9c69-1e2108046b61,},Annotations:map[string]string{io.kubernetes.container.hash: 5cdb680c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d19d323b52af89b5b92bab3b6f19c893aa65fe3177a46cf1454bd513381522b7,PodSandboxId:21444c38a2d27266b67340bde858e6ca2cd849b2b108ea0c7958a7e96447a333,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:17180
15023423319032,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-p8pv2,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: c0ef4698-bf75-4680-bcfa-95167d27a615,},Annotations:map[string]string{io.kubernetes.container.hash: 282b5fcd,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:949c1eb00eb5e8487e589e8300238d291cfc98df4afb881fd561cf758cc78ef6,PodSandboxId:dae785f5d0a0f34d4612019df92ecf91213ba4898a357a7b65a2b10fc4b41d98,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1718015008088664561,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-68cv5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: fcfe5ad1-9315-4ca6-acfe-1a989c307a55,},Annotations:map[string]string{io.kubernetes.container.hash: f797413,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f53b89d046de317315a4195871d181a2ce396fd05e111ab9650e4efb84b51608,PodSandboxId:b1735eeeb605452e888eb5401196ed44b99504553e895e81479b30aa570a7a78,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412
e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1718015001717463769,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5lbmz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9560fdac-7849-4123-9b3f-b4042539052c,},Annotations:map[string]string{io.kubernetes.container.hash: 27f580fc,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80d4218e2abaf26e52a0b15b1daec5f8d45a248f3c62521a5bd620e6cb39ac51,PodSandboxId:7c251429316808727435b4d9092a1cb11bf9f9a0bb64787ad073f709a6c94386,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718014968932542999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93dd7c04-05d2-42a7-9762-bdb57fa30867,},Annotations:map[string]string{io.kubernetes.container.hash: 12e2039,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d51f47f6cffd10ed84592ac370dda69205489c5b11d84b22f2bb4811e54fb4,PodSandboxId:a6bb9746ad3545c7b750d9aa7b2d1480c282ab769205f5af0f084f92aa3f85af,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Imag
e:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718014965442219523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rx46l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8198dacc-399a-413f-ba9c-1721544a3b9a,},Annotations:map[string]string{io.kubernetes.container.hash: 612745aa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8854f
803d622f6fd0c7bc120aed1cbbe06fc982cea0d1ba840b2ce765d2bbb8a,PodSandboxId:c243608ad14ca90465a6848bb87ab08e6cb01492a5045785e4f1a25a90e05e25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718014963381522056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49d2baed-2c3e-4858-8479-918a31ae3835,},Annotations:map[string]string{io.kubernetes.container.hash: d55409fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88fdec6d7662e6f142e2c4782941d1b014d72
5747ad82975d2a3af2d75fbbac,PodSandboxId:61b23db931e05e46b90fe420f2edfdd903899b6855c49a060161dc9cefe5fb00,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718014943395397339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1774aac21a5451245d407877bf5c9b1f,},Annotations:map[string]string{io.kubernetes.container.hash: 9ae77e1c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f0d847f6ad591ffc2d8685f17d719d307927d57b03dac385bc80de1cd722f69,PodSandboxId:930d4bd
f4e6e5c97e64cb524f36fbfd135a3ef984f46eafc10705f3540a5d4cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718014943341283465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88f133aeec1950f817d39a425134e254,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1038be0f6076f6cad62c595b27f3cfd98459c8cb35b6a6e90c6b673fad8e174,PodSandboxId:a40d1bbcc2adfbe1ac233ca4
ad30f4a34b6db12b8adb16beda8e5b77f887f4b5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718014943351427312,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 672548f328c46b786476290618e6a09f,},Annotations:map[string]string{io.kubernetes.container.hash: 70606478,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e4c6832ab8029c82de1b8e68e8894ee49e06552c2cb431ccc85768db866a227,PodSandboxId:850f971a165dae1a6d3908d49d28dbc88bfbdec1b
d4c5b831b0a5c02a4c4a360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718014943335262502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e8d90f3cb5861300be12c4a927a655,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be7e18bd-d680-4bd9-853b-5cc8d9adb5d1 name=/runtime.v1.RuntimeService/ListC
ontainers
	Jun 10 10:30:24 addons-021732 crio[678]: time="2024-06-10 10:30:24.342003803Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fa4e6bca-013d-43f2-a2e0-aefc7b684cd1 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:30:24 addons-021732 crio[678]: time="2024-06-10 10:30:24.342093739Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fa4e6bca-013d-43f2-a2e0-aefc7b684cd1 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:30:24 addons-021732 crio[678]: time="2024-06-10 10:30:24.343352287Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9746d4d6-6ddc-4777-b8a8-e62beb7e9951 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:30:24 addons-021732 crio[678]: time="2024-06-10 10:30:24.344550498Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718015424344523913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584737,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9746d4d6-6ddc-4777-b8a8-e62beb7e9951 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:30:24 addons-021732 crio[678]: time="2024-06-10 10:30:24.345288853Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5fb81876-d30c-4813-a67d-37dab0d1b10b name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:30:24 addons-021732 crio[678]: time="2024-06-10 10:30:24.345353301Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5fb81876-d30c-4813-a67d-37dab0d1b10b name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:30:24 addons-021732 crio[678]: time="2024-06-10 10:30:24.345646841Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc22745643d6853b725260aed1b923e4584d8b14d0021f8f9b42a046e6c006fe,PodSandboxId:7e193706ef9096110b87737cbf61070b4684f0d86473e3a97d0d532143683b26,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1718015221412470525,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-d88fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01572e27-a714-4633-aeea-7e662365ce75,},Annotations:map[string]string{io.kubernetes.container.hash: afd70b2c,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac27835e9bc37e01e901f96ca22c17fd5d02c7d3cc7abe3fb4ed6575a85ef8b,PodSandboxId:eff348790a47f8fccfe3d62e61d16d70653ec33b3f6cf8419aa3b33179bdeda1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1718015081034721462,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8105de8a-be57-47d3-ade8-89321c7029b7,},Annotations:map[string]string{io.kubern
etes.container.hash: 73535256,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5faa2d32e1d9b0154c320a5ace8ef9295cb40018f53b9a1bc29ea84f16ddc2b,PodSandboxId:b5f07ed2ec364ee9893a3550df2d612fed6c86ed923e4a81d732270590f4d9e0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2cfebb9f82f21165fc736638311c2d6b6961fa0226a8164a753cbb589f6b1e43,State:CONTAINER_RUNNING,CreatedAt:1718015059655946675,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7fc69f7444-b726p,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 53f367ca-294c-4305-b2f4-54c5bb185ad9,},Annotations:map[string]string{io.kubernetes.container.hash: 213be43e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a040b0631871c5631fa7c1e5e37c49b6b4f9b576d1bbfe02db04511ebf3231a,PodSandboxId:64b5dd5d40e45f5aa8acbda35a4ed96ef9b876b7b5286e0ad969e9fee9290dd5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1718015047140617852,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-p48fw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c0f8acd7-1aba-434e-9c69-1e2108046b61,},Annotations:map[string]string{io.kubernetes.container.hash: 5cdb680c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d19d323b52af89b5b92bab3b6f19c893aa65fe3177a46cf1454bd513381522b7,PodSandboxId:21444c38a2d27266b67340bde858e6ca2cd849b2b108ea0c7958a7e96447a333,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:17180
15023423319032,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-p8pv2,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: c0ef4698-bf75-4680-bcfa-95167d27a615,},Annotations:map[string]string{io.kubernetes.container.hash: 282b5fcd,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:949c1eb00eb5e8487e589e8300238d291cfc98df4afb881fd561cf758cc78ef6,PodSandboxId:dae785f5d0a0f34d4612019df92ecf91213ba4898a357a7b65a2b10fc4b41d98,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1718015008088664561,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-68cv5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: fcfe5ad1-9315-4ca6-acfe-1a989c307a55,},Annotations:map[string]string{io.kubernetes.container.hash: f797413,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f53b89d046de317315a4195871d181a2ce396fd05e111ab9650e4efb84b51608,PodSandboxId:b1735eeeb605452e888eb5401196ed44b99504553e895e81479b30aa570a7a78,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412
e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1718015001717463769,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5lbmz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9560fdac-7849-4123-9b3f-b4042539052c,},Annotations:map[string]string{io.kubernetes.container.hash: 27f580fc,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80d4218e2abaf26e52a0b15b1daec5f8d45a248f3c62521a5bd620e6cb39ac51,PodSandboxId:7c251429316808727435b4d9092a1cb11bf9f9a0bb64787ad073f709a6c94386,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718014968932542999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93dd7c04-05d2-42a7-9762-bdb57fa30867,},Annotations:map[string]string{io.kubernetes.container.hash: 12e2039,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d51f47f6cffd10ed84592ac370dda69205489c5b11d84b22f2bb4811e54fb4,PodSandboxId:a6bb9746ad3545c7b750d9aa7b2d1480c282ab769205f5af0f084f92aa3f85af,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Imag
e:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718014965442219523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rx46l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8198dacc-399a-413f-ba9c-1721544a3b9a,},Annotations:map[string]string{io.kubernetes.container.hash: 612745aa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8854f
803d622f6fd0c7bc120aed1cbbe06fc982cea0d1ba840b2ce765d2bbb8a,PodSandboxId:c243608ad14ca90465a6848bb87ab08e6cb01492a5045785e4f1a25a90e05e25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718014963381522056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49d2baed-2c3e-4858-8479-918a31ae3835,},Annotations:map[string]string{io.kubernetes.container.hash: d55409fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88fdec6d7662e6f142e2c4782941d1b014d72
5747ad82975d2a3af2d75fbbac,PodSandboxId:61b23db931e05e46b90fe420f2edfdd903899b6855c49a060161dc9cefe5fb00,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718014943395397339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1774aac21a5451245d407877bf5c9b1f,},Annotations:map[string]string{io.kubernetes.container.hash: 9ae77e1c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f0d847f6ad591ffc2d8685f17d719d307927d57b03dac385bc80de1cd722f69,PodSandboxId:930d4bd
f4e6e5c97e64cb524f36fbfd135a3ef984f46eafc10705f3540a5d4cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718014943341283465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88f133aeec1950f817d39a425134e254,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1038be0f6076f6cad62c595b27f3cfd98459c8cb35b6a6e90c6b673fad8e174,PodSandboxId:a40d1bbcc2adfbe1ac233ca4
ad30f4a34b6db12b8adb16beda8e5b77f887f4b5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718014943351427312,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 672548f328c46b786476290618e6a09f,},Annotations:map[string]string{io.kubernetes.container.hash: 70606478,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e4c6832ab8029c82de1b8e68e8894ee49e06552c2cb431ccc85768db866a227,PodSandboxId:850f971a165dae1a6d3908d49d28dbc88bfbdec1b
d4c5b831b0a5c02a4c4a360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718014943335262502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e8d90f3cb5861300be12c4a927a655,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5fb81876-d30c-4813-a67d-37dab0d1b10b name=/runtime.v1.RuntimeService/ListC
ontainers
	Jun 10 10:30:24 addons-021732 crio[678]: time="2024-06-10 10:30:24.381412954Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8e8cdb95-6c7a-40bd-8f9e-805a53c6ac15 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:30:24 addons-021732 crio[678]: time="2024-06-10 10:30:24.381500601Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8e8cdb95-6c7a-40bd-8f9e-805a53c6ac15 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:30:24 addons-021732 crio[678]: time="2024-06-10 10:30:24.382400874Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ac0c5f30-1a3a-4147-9de2-9e7988b01b22 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:30:24 addons-021732 crio[678]: time="2024-06-10 10:30:24.383801894Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718015424383762559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584737,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac0c5f30-1a3a-4147-9de2-9e7988b01b22 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:30:24 addons-021732 crio[678]: time="2024-06-10 10:30:24.384310437Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=801f6456-2a27-4a52-9634-bc50758bbafc name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:30:24 addons-021732 crio[678]: time="2024-06-10 10:30:24.384389340Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=801f6456-2a27-4a52-9634-bc50758bbafc name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:30:24 addons-021732 crio[678]: time="2024-06-10 10:30:24.384700766Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc22745643d6853b725260aed1b923e4584d8b14d0021f8f9b42a046e6c006fe,PodSandboxId:7e193706ef9096110b87737cbf61070b4684f0d86473e3a97d0d532143683b26,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1718015221412470525,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-d88fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 01572e27-a714-4633-aeea-7e662365ce75,},Annotations:map[string]string{io.kubernetes.container.hash: afd70b2c,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ac27835e9bc37e01e901f96ca22c17fd5d02c7d3cc7abe3fb4ed6575a85ef8b,PodSandboxId:eff348790a47f8fccfe3d62e61d16d70653ec33b3f6cf8419aa3b33179bdeda1,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1718015081034721462,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8105de8a-be57-47d3-ade8-89321c7029b7,},Annotations:map[string]string{io.kubern
etes.container.hash: 73535256,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5faa2d32e1d9b0154c320a5ace8ef9295cb40018f53b9a1bc29ea84f16ddc2b,PodSandboxId:b5f07ed2ec364ee9893a3550df2d612fed6c86ed923e4a81d732270590f4d9e0,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2cfebb9f82f21165fc736638311c2d6b6961fa0226a8164a753cbb589f6b1e43,State:CONTAINER_RUNNING,CreatedAt:1718015059655946675,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7fc69f7444-b726p,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: 53f367ca-294c-4305-b2f4-54c5bb185ad9,},Annotations:map[string]string{io.kubernetes.container.hash: 213be43e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a040b0631871c5631fa7c1e5e37c49b6b4f9b576d1bbfe02db04511ebf3231a,PodSandboxId:64b5dd5d40e45f5aa8acbda35a4ed96ef9b876b7b5286e0ad969e9fee9290dd5,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1718015047140617852,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-p48fw,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: c0f8acd7-1aba-434e-9c69-1e2108046b61,},Annotations:map[string]string{io.kubernetes.container.hash: 5cdb680c,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d19d323b52af89b5b92bab3b6f19c893aa65fe3177a46cf1454bd513381522b7,PodSandboxId:21444c38a2d27266b67340bde858e6ca2cd849b2b108ea0c7958a7e96447a333,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:17180
15023423319032,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-p8pv2,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: c0ef4698-bf75-4680-bcfa-95167d27a615,},Annotations:map[string]string{io.kubernetes.container.hash: 282b5fcd,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:949c1eb00eb5e8487e589e8300238d291cfc98df4afb881fd561cf758cc78ef6,PodSandboxId:dae785f5d0a0f34d4612019df92ecf91213ba4898a357a7b65a2b10fc4b41d98,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1718015008088664561,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-68cv5,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: fcfe5ad1-9315-4ca6-acfe-1a989c307a55,},Annotations:map[string]string{io.kubernetes.container.hash: f797413,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f53b89d046de317315a4195871d181a2ce396fd05e111ab9650e4efb84b51608,PodSandboxId:b1735eeeb605452e888eb5401196ed44b99504553e895e81479b30aa570a7a78,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412
e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1718015001717463769,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-5lbmz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9560fdac-7849-4123-9b3f-b4042539052c,},Annotations:map[string]string{io.kubernetes.container.hash: 27f580fc,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80d4218e2abaf26e52a0b15b1daec5f8d45a248f3c62521a5bd620e6cb39ac51,PodSandboxId:7c251429316808727435b4d9092a1cb11bf9f9a0bb64787ad073f709a6c94386,Metadata:&ContainerMetadata{Name:storage-pro
visioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718014968932542999,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93dd7c04-05d2-42a7-9762-bdb57fa30867,},Annotations:map[string]string{io.kubernetes.container.hash: 12e2039,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d51f47f6cffd10ed84592ac370dda69205489c5b11d84b22f2bb4811e54fb4,PodSandboxId:a6bb9746ad3545c7b750d9aa7b2d1480c282ab769205f5af0f084f92aa3f85af,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Imag
e:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718014965442219523,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rx46l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8198dacc-399a-413f-ba9c-1721544a3b9a,},Annotations:map[string]string{io.kubernetes.container.hash: 612745aa,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8854f
803d622f6fd0c7bc120aed1cbbe06fc982cea0d1ba840b2ce765d2bbb8a,PodSandboxId:c243608ad14ca90465a6848bb87ab08e6cb01492a5045785e4f1a25a90e05e25,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718014963381522056,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7846w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49d2baed-2c3e-4858-8479-918a31ae3835,},Annotations:map[string]string{io.kubernetes.container.hash: d55409fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b88fdec6d7662e6f142e2c4782941d1b014d72
5747ad82975d2a3af2d75fbbac,PodSandboxId:61b23db931e05e46b90fe420f2edfdd903899b6855c49a060161dc9cefe5fb00,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718014943395397339,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1774aac21a5451245d407877bf5c9b1f,},Annotations:map[string]string{io.kubernetes.container.hash: 9ae77e1c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f0d847f6ad591ffc2d8685f17d719d307927d57b03dac385bc80de1cd722f69,PodSandboxId:930d4bd
f4e6e5c97e64cb524f36fbfd135a3ef984f46eafc10705f3540a5d4cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718014943341283465,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 88f133aeec1950f817d39a425134e254,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1038be0f6076f6cad62c595b27f3cfd98459c8cb35b6a6e90c6b673fad8e174,PodSandboxId:a40d1bbcc2adfbe1ac233ca4
ad30f4a34b6db12b8adb16beda8e5b77f887f4b5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718014943351427312,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 672548f328c46b786476290618e6a09f,},Annotations:map[string]string{io.kubernetes.container.hash: 70606478,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e4c6832ab8029c82de1b8e68e8894ee49e06552c2cb431ccc85768db866a227,PodSandboxId:850f971a165dae1a6d3908d49d28dbc88bfbdec1b
d4c5b831b0a5c02a4c4a360,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718014943335262502,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-021732,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56e8d90f3cb5861300be12c4a927a655,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=801f6456-2a27-4a52-9634-bc50758bbafc name=/runtime.v1.RuntimeService/ListC
ontainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dc22745643d68       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                 3 minutes ago       Running             hello-world-app           0                   7e193706ef909       hello-world-app-86c47465fc-d88fw
	4ac27835e9bc3       docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa                         5 minutes ago       Running             nginx                     0                   eff348790a47f       nginx
	e5faa2d32e1d9       ghcr.io/headlamp-k8s/headlamp@sha256:6dec009152279527b62e3fac947a2e40f6f99bff29259974b995f0606a9213e5                   6 minutes ago       Running             headlamp                  0                   b5f07ed2ec364       headlamp-7fc69f7444-b726p
	2a040b0631871       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            6 minutes ago       Running             gcp-auth                  0                   64b5dd5d40e45       gcp-auth-5db96cd9b4-p48fw
	d19d323b52af8       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                         6 minutes ago       Running             yakd                      0                   21444c38a2d27       yakd-dashboard-5ddbf7d777-p8pv2
	949c1eb00eb5e       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        6 minutes ago       Running             local-path-provisioner    0                   dae785f5d0a0f       local-path-provisioner-8d985888d-68cv5
	f53b89d046de3       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   b1735eeeb6054       metrics-server-c59844bb4-5lbmz
	80d4218e2abaf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   7c25142931680       storage-provisioner
	12d51f47f6cff       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        7 minutes ago       Running             coredns                   0                   a6bb9746ad354       coredns-7db6d8ff4d-rx46l
	8854f803d622f       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                                        7 minutes ago       Running             kube-proxy                0                   c243608ad14ca       kube-proxy-7846w
	b88fdec6d7662       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        8 minutes ago       Running             etcd                      0                   61b23db931e05       etcd-addons-021732
	b1038be0f6076       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                                        8 minutes ago       Running             kube-apiserver            0                   a40d1bbcc2adf       kube-apiserver-addons-021732
	3f0d847f6ad59       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                                        8 minutes ago       Running             kube-scheduler            0                   930d4bdf4e6e5       kube-scheduler-addons-021732
	1e4c6832ab802       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                                        8 minutes ago       Running             kube-controller-manager   0                   850f971a165da       kube-controller-manager-addons-021732
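A container listing like the table above can typically be reproduced directly on the node with crictl. The invocation below is only a sketch: it assumes the minikube profile name used in this run and that crictl is available on the node, and it is not part of the captured log.

	minikube -p addons-021732 ssh "sudo crictl ps -a"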
	
	
	==> coredns [12d51f47f6cffd10ed84592ac370dda69205489c5b11d84b22f2bb4811e54fb4] <==
	[INFO] 10.244.0.8:40069 - 46801 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000229563s
	[INFO] 10.244.0.8:54549 - 49709 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000100756s
	[INFO] 10.244.0.8:54549 - 21032 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000168558s
	[INFO] 10.244.0.8:46362 - 20939 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094505s
	[INFO] 10.244.0.8:46362 - 35028 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000122818s
	[INFO] 10.244.0.8:53788 - 63092 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000136115s
	[INFO] 10.244.0.8:53788 - 45430 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000108752s
	[INFO] 10.244.0.8:42874 - 34342 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000070314s
	[INFO] 10.244.0.8:42874 - 49973 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000087667s
	[INFO] 10.244.0.8:53948 - 15404 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000062492s
	[INFO] 10.244.0.8:53948 - 49705 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114457s
	[INFO] 10.244.0.8:56589 - 27141 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000088277s
	[INFO] 10.244.0.8:56589 - 18439 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000028869s
	[INFO] 10.244.0.8:46806 - 29265 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000054209s
	[INFO] 10.244.0.8:46806 - 21843 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000108584s
	[INFO] 10.244.0.22:36587 - 53548 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00041039s
	[INFO] 10.244.0.22:49493 - 23515 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000183696s
	[INFO] 10.244.0.22:37858 - 42634 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127859s
	[INFO] 10.244.0.22:38164 - 1990 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000088632s
	[INFO] 10.244.0.22:55093 - 18374 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000153452s
	[INFO] 10.244.0.22:33104 - 21697 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000088884s
	[INFO] 10.244.0.22:33110 - 32901 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000760966s
	[INFO] 10.244.0.22:35001 - 54488 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001532762s
	[INFO] 10.244.0.25:38715 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000368649s
	[INFO] 10.244.0.25:44760 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000271927s
	
	
	==> describe nodes <==
	Name:               addons-021732
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-021732
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=addons-021732
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T10_22_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-021732
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 10:22:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-021732
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 10:30:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 10:27:34 +0000   Mon, 10 Jun 2024 10:22:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 10:27:34 +0000   Mon, 10 Jun 2024 10:22:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 10:27:34 +0000   Mon, 10 Jun 2024 10:22:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 10:27:34 +0000   Mon, 10 Jun 2024 10:22:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.244
	  Hostname:    addons-021732
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 2f47abb3d6e54cdd89e31e075ba7516b
	  System UUID:                2f47abb3-d6e5-4cdd-89e3-1e075ba7516b
	  Boot ID:                    fe81519d-bfc4-45c9-a1b7-f84e0a5c322a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                      ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-d88fw          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  default                     nginx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	  gcp-auth                    gcp-auth-5db96cd9b4-p48fw                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m31s
	  headlamp                    headlamp-7fc69f7444-b726p                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 coredns-7db6d8ff4d-rx46l                  100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m42s
	  kube-system                 etcd-addons-021732                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m56s
	  kube-system                 kube-apiserver-addons-021732              250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m56s
	  kube-system                 kube-controller-manager-addons-021732     200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m56s
	  kube-system                 kube-proxy-7846w                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m42s
	  kube-system                 kube-scheduler-addons-021732              100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m56s
	  kube-system                 metrics-server-c59844bb4-5lbmz            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m37s
	  kube-system                 storage-provisioner                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m37s
	  local-path-storage          local-path-provisioner-8d985888d-68cv5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m37s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-p8pv2           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     7m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m40s  kube-proxy       
	  Normal  Starting                 7m56s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m56s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m56s  kubelet          Node addons-021732 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m56s  kubelet          Node addons-021732 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m56s  kubelet          Node addons-021732 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m55s  kubelet          Node addons-021732 status is now: NodeReady
	  Normal  RegisteredNode           7m43s  node-controller  Node addons-021732 event: Registered Node addons-021732 in Controller
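The node description above is what kubectl describe node reports for the cluster node. A sketch of the likely invocation, assuming the profile's kubeconfig context and node name shown in this run, would be:

	kubectl --context addons-021732 describe node addons-021732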
	
	
	==> dmesg <==
	[  +5.136093] kauditd_printk_skb: 115 callbacks suppressed
	[  +5.003401] kauditd_printk_skb: 140 callbacks suppressed
	[  +5.234725] kauditd_printk_skb: 56 callbacks suppressed
	[Jun10 10:23] kauditd_printk_skb: 7 callbacks suppressed
	[ +16.787619] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.817677] kauditd_printk_skb: 4 callbacks suppressed
	[ +17.418807] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.099575] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.058301] kauditd_printk_skb: 76 callbacks suppressed
	[Jun10 10:24] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.476449] kauditd_printk_skb: 4 callbacks suppressed
	[  +9.163474] kauditd_printk_skb: 50 callbacks suppressed
	[  +5.006496] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.006368] kauditd_printk_skb: 32 callbacks suppressed
	[  +6.486687] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.027423] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.723339] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.040751] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.683552] kauditd_printk_skb: 23 callbacks suppressed
	[Jun10 10:25] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.043017] kauditd_printk_skb: 2 callbacks suppressed
	[ +24.063671] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.228544] kauditd_printk_skb: 33 callbacks suppressed
	[Jun10 10:26] kauditd_printk_skb: 6 callbacks suppressed
	[Jun10 10:27] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [b88fdec6d7662e6f142e2c4782941d1b014d725747ad82975d2a3af2d75fbbac] <==
	{"level":"warn","ts":"2024-06-10T10:23:51.458386Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"217.905865ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-10T10:23:51.458418Z","caller":"traceutil/trace.go:171","msg":"trace[111553454] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1021; }","duration":"217.952098ms","start":"2024-06-10T10:23:51.24046Z","end":"2024-06-10T10:23:51.458413Z","steps":["trace[111553454] 'agreement among raft nodes before linearized reading'  (duration: 217.910523ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T10:23:58.640739Z","caller":"traceutil/trace.go:171","msg":"trace[4127607] linearizableReadLoop","detail":"{readStateIndex:1114; appliedIndex:1113; }","duration":"353.373488ms","start":"2024-06-10T10:23:58.287294Z","end":"2024-06-10T10:23:58.640667Z","steps":["trace[4127607] 'read index received'  (duration: 353.228665ms)","trace[4127607] 'applied index is now lower than readState.Index'  (duration: 144.396µs)"],"step_count":2}
	{"level":"warn","ts":"2024-06-10T10:23:58.640972Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"353.697976ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/metrics-server-c59844bb4-5lbmz.17d79d8aeb5d3716\" ","response":"range_response_count:1 size:813"}
	{"level":"info","ts":"2024-06-10T10:23:58.641016Z","caller":"traceutil/trace.go:171","msg":"trace[1238059756] range","detail":"{range_begin:/registry/events/kube-system/metrics-server-c59844bb4-5lbmz.17d79d8aeb5d3716; range_end:; response_count:1; response_revision:1080; }","duration":"353.773314ms","start":"2024-06-10T10:23:58.28723Z","end":"2024-06-10T10:23:58.641004Z","steps":["trace[1238059756] 'agreement among raft nodes before linearized reading'  (duration: 353.627312ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T10:23:58.641039Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-10T10:23:58.287216Z","time spent":"353.81838ms","remote":"127.0.0.1:58184","response type":"/etcdserverpb.KV/Range","request count":0,"request size":78,"response count":1,"response size":836,"request content":"key:\"/registry/events/kube-system/metrics-server-c59844bb4-5lbmz.17d79d8aeb5d3716\" "}
	{"level":"warn","ts":"2024-06-10T10:23:58.641135Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.387541ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85556"}
	{"level":"info","ts":"2024-06-10T10:23:58.64121Z","caller":"traceutil/trace.go:171","msg":"trace[586710287] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1080; }","duration":"284.438087ms","start":"2024-06-10T10:23:58.356716Z","end":"2024-06-10T10:23:58.641154Z","steps":["trace[586710287] 'agreement among raft nodes before linearized reading'  (duration: 284.287748ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T10:23:58.641279Z","caller":"traceutil/trace.go:171","msg":"trace[1788749010] transaction","detail":"{read_only:false; response_revision:1080; number_of_response:1; }","duration":"372.800219ms","start":"2024-06-10T10:23:58.268467Z","end":"2024-06-10T10:23:58.641268Z","steps":["trace[1788749010] 'process raft request'  (duration: 372.094944ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T10:23:58.641337Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-10T10:23:58.268451Z","time spent":"372.847229ms","remote":"127.0.0.1:58282","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1065 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-06-10T10:23:58.641404Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.037911ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/yakd-dashboard/yakd-dashboard-5ddbf7d777-p8pv2\" ","response":"range_response_count:1 size:4502"}
	{"level":"info","ts":"2024-06-10T10:23:58.641426Z","caller":"traceutil/trace.go:171","msg":"trace[1900020226] range","detail":"{range_begin:/registry/pods/yakd-dashboard/yakd-dashboard-5ddbf7d777-p8pv2; range_end:; response_count:1; response_revision:1080; }","duration":"176.081274ms","start":"2024-06-10T10:23:58.465338Z","end":"2024-06-10T10:23:58.641419Z","steps":["trace[1900020226] 'agreement among raft nodes before linearized reading'  (duration: 176.025316ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T10:23:58.641527Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"258.68431ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
	{"level":"info","ts":"2024-06-10T10:23:58.641541Z","caller":"traceutil/trace.go:171","msg":"trace[1091279413] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1080; }","duration":"258.717387ms","start":"2024-06-10T10:23:58.382819Z","end":"2024-06-10T10:23:58.641537Z","steps":["trace[1091279413] 'agreement among raft nodes before linearized reading'  (duration: 258.668932ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T10:24:13.18708Z","caller":"traceutil/trace.go:171","msg":"trace[260758174] transaction","detail":"{read_only:false; response_revision:1163; number_of_response:1; }","duration":"198.938207ms","start":"2024-06-10T10:24:12.988118Z","end":"2024-06-10T10:24:13.187057Z","steps":["trace[260758174] 'process raft request'  (duration: 198.721823ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T10:24:18.352702Z","caller":"traceutil/trace.go:171","msg":"trace[351633498] linearizableReadLoop","detail":"{readStateIndex:1252; appliedIndex:1251; }","duration":"111.959486ms","start":"2024-06-10T10:24:18.240716Z","end":"2024-06-10T10:24:18.352676Z","steps":["trace[351633498] 'read index received'  (duration: 111.584562ms)","trace[351633498] 'applied index is now lower than readState.Index'  (duration: 374.168µs)"],"step_count":2}
	{"level":"info","ts":"2024-06-10T10:24:18.352922Z","caller":"traceutil/trace.go:171","msg":"trace[641990916] transaction","detail":"{read_only:false; response_revision:1214; number_of_response:1; }","duration":"155.414016ms","start":"2024-06-10T10:24:18.197499Z","end":"2024-06-10T10:24:18.352913Z","steps":["trace[641990916] 'process raft request'  (duration: 154.998039ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T10:24:18.353602Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.828137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-10T10:24:18.353694Z","caller":"traceutil/trace.go:171","msg":"trace[1273084539] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1214; }","duration":"112.994761ms","start":"2024-06-10T10:24:18.24069Z","end":"2024-06-10T10:24:18.353685Z","steps":["trace[1273084539] 'agreement among raft nodes before linearized reading'  (duration: 112.835385ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T10:24:38.715488Z","caller":"traceutil/trace.go:171","msg":"trace[1501931927] linearizableReadLoop","detail":"{readStateIndex:1436; appliedIndex:1435; }","duration":"100.920377ms","start":"2024-06-10T10:24:38.614555Z","end":"2024-06-10T10:24:38.715475Z","steps":["trace[1501931927] 'read index received'  (duration: 100.797798ms)","trace[1501931927] 'applied index is now lower than readState.Index'  (duration: 122.175µs)"],"step_count":2}
	{"level":"warn","ts":"2024-06-10T10:24:38.715679Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.127584ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2024-06-10T10:24:38.715702Z","caller":"traceutil/trace.go:171","msg":"trace[1480646814] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1387; }","duration":"101.195474ms","start":"2024-06-10T10:24:38.6145Z","end":"2024-06-10T10:24:38.715696Z","steps":["trace[1480646814] 'agreement among raft nodes before linearized reading'  (duration: 101.0418ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T10:24:38.715987Z","caller":"traceutil/trace.go:171","msg":"trace[1456422452] transaction","detail":"{read_only:false; response_revision:1387; number_of_response:1; }","duration":"322.914736ms","start":"2024-06-10T10:24:38.393052Z","end":"2024-06-10T10:24:38.715967Z","steps":["trace[1456422452] 'process raft request'  (duration: 322.346736ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T10:24:38.716157Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-10T10:24:38.393037Z","time spent":"322.984558ms","remote":"127.0.0.1:58396","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1359 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-06-10T10:25:19.444156Z","caller":"traceutil/trace.go:171","msg":"trace[1738130015] transaction","detail":"{read_only:false; response_revision:1528; number_of_response:1; }","duration":"115.328245ms","start":"2024-06-10T10:25:19.328808Z","end":"2024-06-10T10:25:19.444136Z","steps":["trace[1738130015] 'process raft request'  (duration: 115.175735ms)"],"step_count":1}
	
	
	==> gcp-auth [2a040b0631871c5631fa7c1e5e37c49b6b4f9b576d1bbfe02db04511ebf3231a] <==
	2024/06/10 10:24:07 GCP Auth Webhook started!
	2024/06/10 10:24:13 Ready to marshal response ...
	2024/06/10 10:24:13 Ready to write response ...
	2024/06/10 10:24:13 Ready to marshal response ...
	2024/06/10 10:24:13 Ready to write response ...
	2024/06/10 10:24:13 Ready to marshal response ...
	2024/06/10 10:24:13 Ready to write response ...
	2024/06/10 10:24:17 Ready to marshal response ...
	2024/06/10 10:24:17 Ready to write response ...
	2024/06/10 10:24:23 Ready to marshal response ...
	2024/06/10 10:24:23 Ready to write response ...
	2024/06/10 10:24:36 Ready to marshal response ...
	2024/06/10 10:24:36 Ready to write response ...
	2024/06/10 10:24:42 Ready to marshal response ...
	2024/06/10 10:24:42 Ready to write response ...
	2024/06/10 10:24:42 Ready to marshal response ...
	2024/06/10 10:24:42 Ready to write response ...
	2024/06/10 10:24:53 Ready to marshal response ...
	2024/06/10 10:24:53 Ready to write response ...
	2024/06/10 10:25:11 Ready to marshal response ...
	2024/06/10 10:25:11 Ready to write response ...
	2024/06/10 10:25:34 Ready to marshal response ...
	2024/06/10 10:25:34 Ready to write response ...
	2024/06/10 10:26:57 Ready to marshal response ...
	2024/06/10 10:26:57 Ready to write response ...
	
	
	==> kernel <==
	 10:30:24 up 8 min,  0 users,  load average: 0.09, 0.75, 0.58
	Linux addons-021732 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b1038be0f6076f6cad62c595b27f3cfd98459c8cb35b6a6e90c6b673fad8e174] <==
	I0610 10:24:29.321657       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 10:24:29.322306       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 10:24:29.322330       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0610 10:24:29.322792       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 10:24:33.334507       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 10:24:33.334559       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0610 10:24:33.334774       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.136.138:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.136.138:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
	I0610 10:24:33.342607       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0610 10:24:36.686816       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0610 10:24:36.875868       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.36.41"}
	I0610 10:25:26.795742       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0610 10:25:52.081892       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0610 10:25:52.082045       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0610 10:25:52.152777       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0610 10:25:52.152896       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0610 10:25:52.197518       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0610 10:25:52.197565       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0610 10:25:52.247122       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0610 10:25:52.247206       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0610 10:25:53.169875       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0610 10:25:53.248121       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0610 10:25:53.262952       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0610 10:26:57.976339       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.133.237"}
	E0610 10:27:00.916550       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [1e4c6832ab8029c82de1b8e68e8894ee49e06552c2cb431ccc85768db866a227] <==
	W0610 10:28:16.989273       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:28:16.989347       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 10:28:29.013062       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:28:29.013124       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 10:28:48.400865       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:28:48.401072       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 10:28:48.486113       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:28:48.486258       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 10:28:50.858558       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:28:50.858605       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 10:29:18.347813       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:29:18.347976       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 10:29:26.703602       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:29:26.703749       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 10:29:36.204147       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:29:36.204417       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 10:29:39.938500       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:29:39.938651       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 10:30:09.589400       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:30:09.589480       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 10:30:12.108380       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:30:12.108438       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0610 10:30:13.195422       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0610 10:30:13.195484       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0610 10:30:23.347961       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="13.786µs"
	
	
	==> kube-proxy [8854f803d622f6fd0c7bc120aed1cbbe06fc982cea0d1ba840b2ce765d2bbb8a] <==
	I0610 10:22:44.383213       1 server_linux.go:69] "Using iptables proxy"
	I0610 10:22:44.413964       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.244"]
	I0610 10:22:44.480317       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 10:22:44.480364       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 10:22:44.480379       1 server_linux.go:165] "Using iptables Proxier"
	I0610 10:22:44.483590       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 10:22:44.483836       1 server.go:872] "Version info" version="v1.30.1"
	I0610 10:22:44.483863       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 10:22:44.484874       1 config.go:192] "Starting service config controller"
	I0610 10:22:44.484883       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 10:22:44.484909       1 config.go:101] "Starting endpoint slice config controller"
	I0610 10:22:44.484913       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 10:22:44.490872       1 config.go:319] "Starting node config controller"
	I0610 10:22:44.490896       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 10:22:44.585703       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 10:22:44.585785       1 shared_informer.go:320] Caches are synced for service config
	I0610 10:22:44.591142       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3f0d847f6ad591ffc2d8685f17d719d307927d57b03dac385bc80de1cd722f69] <==
	W0610 10:22:25.955458       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 10:22:25.955492       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0610 10:22:26.784539       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0610 10:22:26.784656       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0610 10:22:26.965131       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0610 10:22:26.965193       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0610 10:22:26.978741       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 10:22:26.978792       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0610 10:22:26.989317       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 10:22:26.989360       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 10:22:27.003398       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0610 10:22:27.004045       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0610 10:22:27.062033       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 10:22:27.062080       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 10:22:27.062286       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 10:22:27.062310       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0610 10:22:27.081235       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 10:22:27.081377       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0610 10:22:27.181084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 10:22:27.181202       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 10:22:27.181622       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 10:22:27.181697       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0610 10:22:27.409810       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 10:22:27.409855       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 10:22:29.648999       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 10 10:27:04 addons-021732 kubelet[1270]: I0610 10:27:04.786277    1270 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff4e2bfacd0a03d95095ad64b8a4664dc28f59e7e882273549c18faf92fc8ab"} err="failed to get container status \"2ff4e2bfacd0a03d95095ad64b8a4664dc28f59e7e882273549c18faf92fc8ab\": rpc error: code = NotFound desc = could not find container \"2ff4e2bfacd0a03d95095ad64b8a4664dc28f59e7e882273549c18faf92fc8ab\": container with ID starting with 2ff4e2bfacd0a03d95095ad64b8a4664dc28f59e7e882273549c18faf92fc8ab not found: ID does not exist"
	Jun 10 10:27:28 addons-021732 kubelet[1270]: E0610 10:27:28.745609    1270 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 10:27:28 addons-021732 kubelet[1270]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 10:27:28 addons-021732 kubelet[1270]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 10:27:28 addons-021732 kubelet[1270]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 10:27:28 addons-021732 kubelet[1270]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 10:27:29 addons-021732 kubelet[1270]: I0610 10:27:29.202097    1270 scope.go:117] "RemoveContainer" containerID="e95bd0d26e99f7a5090ed919f987675e29905c7526b19ef6f6659706a74e16c0"
	Jun 10 10:27:29 addons-021732 kubelet[1270]: I0610 10:27:29.216348    1270 scope.go:117] "RemoveContainer" containerID="c2afd05933710d41bfef6803fb1ce14a4dab8e99f9da9efa653bf92cabc5f341"
	Jun 10 10:28:28 addons-021732 kubelet[1270]: E0610 10:28:28.747887    1270 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 10:28:28 addons-021732 kubelet[1270]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 10:28:28 addons-021732 kubelet[1270]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 10:28:28 addons-021732 kubelet[1270]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 10:28:28 addons-021732 kubelet[1270]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 10:29:28 addons-021732 kubelet[1270]: E0610 10:29:28.746898    1270 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 10:29:28 addons-021732 kubelet[1270]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 10:29:28 addons-021732 kubelet[1270]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 10:29:28 addons-021732 kubelet[1270]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 10:29:28 addons-021732 kubelet[1270]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 10:30:23 addons-021732 kubelet[1270]: I0610 10:30:23.375662    1270 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-86c47465fc-d88fw" podStartSLOduration=203.364083507 podStartE2EDuration="3m26.37562007s" podCreationTimestamp="2024-06-10 10:26:57 +0000 UTC" firstStartedPulling="2024-06-10 10:26:58.386546555 +0000 UTC m=+269.786977256" lastFinishedPulling="2024-06-10 10:27:01.398083121 +0000 UTC m=+272.798513819" observedRunningTime="2024-06-10 10:27:01.770336722 +0000 UTC m=+273.170767440" watchObservedRunningTime="2024-06-10 10:30:23.37562007 +0000 UTC m=+474.776050788"
	Jun 10 10:30:24 addons-021732 kubelet[1270]: I0610 10:30:24.855395    1270 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cj2s\" (UniqueName: \"kubernetes.io/projected/9560fdac-7849-4123-9b3f-b4042539052c-kube-api-access-9cj2s\") pod \"9560fdac-7849-4123-9b3f-b4042539052c\" (UID: \"9560fdac-7849-4123-9b3f-b4042539052c\") "
	Jun 10 10:30:24 addons-021732 kubelet[1270]: I0610 10:30:24.855442    1270 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9560fdac-7849-4123-9b3f-b4042539052c-tmp-dir\") pod \"9560fdac-7849-4123-9b3f-b4042539052c\" (UID: \"9560fdac-7849-4123-9b3f-b4042539052c\") "
	Jun 10 10:30:24 addons-021732 kubelet[1270]: I0610 10:30:24.855826    1270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9560fdac-7849-4123-9b3f-b4042539052c-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "9560fdac-7849-4123-9b3f-b4042539052c" (UID: "9560fdac-7849-4123-9b3f-b4042539052c"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jun 10 10:30:24 addons-021732 kubelet[1270]: I0610 10:30:24.864377    1270 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9560fdac-7849-4123-9b3f-b4042539052c-kube-api-access-9cj2s" (OuterVolumeSpecName: "kube-api-access-9cj2s") pod "9560fdac-7849-4123-9b3f-b4042539052c" (UID: "9560fdac-7849-4123-9b3f-b4042539052c"). InnerVolumeSpecName "kube-api-access-9cj2s". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 10 10:30:24 addons-021732 kubelet[1270]: I0610 10:30:24.956225    1270 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9560fdac-7849-4123-9b3f-b4042539052c-tmp-dir\") on node \"addons-021732\" DevicePath \"\""
	Jun 10 10:30:24 addons-021732 kubelet[1270]: I0610 10:30:24.956257    1270 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9cj2s\" (UniqueName: \"kubernetes.io/projected/9560fdac-7849-4123-9b3f-b4042539052c-kube-api-access-9cj2s\") on node \"addons-021732\" DevicePath \"\""
	
	
	==> storage-provisioner [80d4218e2abaf26e52a0b15b1daec5f8d45a248f3c62521a5bd620e6cb39ac51] <==
	I0610 10:22:50.581254       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 10:22:50.661037       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 10:22:50.661142       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 10:22:50.677924       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 10:22:50.678303       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-021732_e4916fdc-bb70-4fc6-a576-6defff5c5bc4!
	I0610 10:22:50.684718       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a6232844-0476-41f5-b9df-66a2659aee82", APIVersion:"v1", ResourceVersion:"708", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-021732_e4916fdc-bb70-4fc6-a576-6defff5c5bc4 became leader
	I0610 10:22:50.779100       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-021732_e4916fdc-bb70-4fc6-a576-6defff5c5bc4!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-021732 -n addons-021732
helpers_test.go:261: (dbg) Run:  kubectl --context addons-021732 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-c59844bb4-5lbmz
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-021732 describe pod metrics-server-c59844bb4-5lbmz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-021732 describe pod metrics-server-c59844bb4-5lbmz: exit status 1 (67.304852ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-c59844bb4-5lbmz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-021732 describe pod metrics-server-c59844bb4-5lbmz: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (358.94s)
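
For reference, the post-mortem above is just two kubectl checks that can be rerun by hand while the profile is still up. A minimal sketch, assuming the addons-021732 context is reachable and that the metrics-server pods carry the addon's usual k8s-app=metrics-server label:

	# list pods that are not in the Running phase, across all namespaces (the same filter the harness uses)
	kubectl --context addons-021732 get pods -A --field-selector=status.phase!=Running

	# describe whatever metrics-server pod currently exists; its events usually explain why it never became Ready
	kubectl --context addons-021732 -n kube-system describe pod -l k8s-app=metrics-server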

                                                
                                    
TestAddons/StoppedEnableDisable (154.27s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-021732
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-021732: exit status 82 (2m0.465977063s)

                                                
                                                
-- stdout --
	* Stopping node "addons-021732"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-021732" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-021732
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-021732: exit status 11 (21.51839004s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.244:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-021732" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-021732
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-021732: exit status 11 (6.14311697s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.244:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-021732" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-021732
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-021732: exit status 11 (6.144500559s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.244:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-021732" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.27s)
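
Both failures in this test share one root cause: the initial minikube stop hit GUEST_STOP_TIMEOUT, so the VM was left in state "Running" while SSH to 192.168.39.244 became unreachable, and every later addon command then failed its paused check with "no route to host". When a graceful stop times out on the kvm2 driver, the domain can be inspected and, if necessary, forced off at the libvirt level before collecting logs. A minimal sketch, assuming the libvirt domain is named after the profile:

	# confirm what libvirt thinks the VM is doing
	sudo virsh list --all
	sudo virsh domstate addons-021732

	# force the domain off, then gather logs for the GitHub issue as the error output suggests
	sudo virsh destroy addons-021732
	out/minikube-linux-amd64 logs --file=logs.txt -p addons-021732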

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 node stop m02 -v=7 --alsologtostderr
E0610 10:42:38.875717   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
E0610 10:43:19.836124   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
E0610 10:44:12.453525   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565925 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.474886111s)

                                                
                                                
-- stdout --
	* Stopping node "ha-565925-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:42:29.782955   25862 out.go:291] Setting OutFile to fd 1 ...
	I0610 10:42:29.783352   25862 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:42:29.783366   25862 out.go:304] Setting ErrFile to fd 2...
	I0610 10:42:29.783373   25862 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:42:29.783864   25862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 10:42:29.784153   25862 mustload.go:65] Loading cluster: ha-565925
	I0610 10:42:29.784519   25862 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:42:29.784537   25862 stop.go:39] StopHost: ha-565925-m02
	I0610 10:42:29.784964   25862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:42:29.785025   25862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:42:29.800489   25862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33219
	I0610 10:42:29.800939   25862 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:42:29.801580   25862 main.go:141] libmachine: Using API Version  1
	I0610 10:42:29.801603   25862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:42:29.801938   25862 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:42:29.804314   25862 out.go:177] * Stopping node "ha-565925-m02"  ...
	I0610 10:42:29.805660   25862 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0610 10:42:29.805694   25862 main.go:141] libmachine: (ha-565925-m02) Calling .DriverName
	I0610 10:42:29.805948   25862 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0610 10:42:29.805979   25862 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:42:29.809349   25862 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:42:29.809852   25862 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:42:29.809874   25862 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:42:29.810119   25862 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:42:29.810283   25862 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:42:29.810438   25862 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:42:29.810614   25862 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/id_rsa Username:docker}
	I0610 10:42:29.897301   25862 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0610 10:42:29.950908   25862 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0610 10:42:30.004906   25862 main.go:141] libmachine: Stopping "ha-565925-m02"...
	I0610 10:42:30.004934   25862 main.go:141] libmachine: (ha-565925-m02) Calling .GetState
	I0610 10:42:30.006595   25862 main.go:141] libmachine: (ha-565925-m02) Calling .Stop
	I0610 10:42:30.010265   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 0/120
	I0610 10:42:31.011676   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 1/120
	I0610 10:42:32.013290   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 2/120
	I0610 10:42:33.015418   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 3/120
	I0610 10:42:34.016624   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 4/120
	I0610 10:42:35.019186   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 5/120
	I0610 10:42:36.020829   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 6/120
	I0610 10:42:37.022220   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 7/120
	I0610 10:42:38.023667   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 8/120
	I0610 10:42:39.025183   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 9/120
	I0610 10:42:40.026746   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 10/120
	I0610 10:42:41.028358   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 11/120
	I0610 10:42:42.029924   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 12/120
	I0610 10:42:43.031418   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 13/120
	I0610 10:42:44.033693   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 14/120
	I0610 10:42:45.035810   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 15/120
	I0610 10:42:46.037402   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 16/120
	I0610 10:42:47.039520   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 17/120
	I0610 10:42:48.041005   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 18/120
	I0610 10:42:49.042374   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 19/120
	I0610 10:42:50.044451   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 20/120
	I0610 10:42:51.046387   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 21/120
	I0610 10:42:52.047996   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 22/120
	I0610 10:42:53.049583   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 23/120
	I0610 10:42:54.051619   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 24/120
	I0610 10:42:55.053912   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 25/120
	I0610 10:42:56.055529   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 26/120
	I0610 10:42:57.057129   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 27/120
	I0610 10:42:58.059451   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 28/120
	I0610 10:42:59.061059   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 29/120
	I0610 10:43:00.063096   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 30/120
	I0610 10:43:01.064387   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 31/120
	I0610 10:43:02.065871   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 32/120
	I0610 10:43:03.067575   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 33/120
	I0610 10:43:04.069390   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 34/120
	I0610 10:43:05.071600   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 35/120
	I0610 10:43:06.073122   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 36/120
	I0610 10:43:07.074733   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 37/120
	I0610 10:43:08.076403   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 38/120
	I0610 10:43:09.077807   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 39/120
	I0610 10:43:10.080184   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 40/120
	I0610 10:43:11.081689   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 41/120
	I0610 10:43:12.083612   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 42/120
	I0610 10:43:13.085986   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 43/120
	I0610 10:43:14.087137   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 44/120
	I0610 10:43:15.089463   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 45/120
	I0610 10:43:16.090929   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 46/120
	I0610 10:43:17.092369   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 47/120
	I0610 10:43:18.093801   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 48/120
	I0610 10:43:19.095503   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 49/120
	I0610 10:43:20.097687   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 50/120
	I0610 10:43:21.099056   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 51/120
	I0610 10:43:22.100705   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 52/120
	I0610 10:43:23.102678   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 53/120
	I0610 10:43:24.104084   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 54/120
	I0610 10:43:25.106462   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 55/120
	I0610 10:43:26.108709   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 56/120
	I0610 10:43:27.110119   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 57/120
	I0610 10:43:28.111599   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 58/120
	I0610 10:43:29.113076   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 59/120
	I0610 10:43:30.114622   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 60/120
	I0610 10:43:31.115968   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 61/120
	I0610 10:43:32.117517   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 62/120
	I0610 10:43:33.119736   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 63/120
	I0610 10:43:34.121298   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 64/120
	I0610 10:43:35.123420   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 65/120
	I0610 10:43:36.124927   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 66/120
	I0610 10:43:37.127123   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 67/120
	I0610 10:43:38.128799   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 68/120
	I0610 10:43:39.130528   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 69/120
	I0610 10:43:40.132748   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 70/120
	I0610 10:43:41.134296   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 71/120
	I0610 10:43:42.136237   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 72/120
	I0610 10:43:43.137702   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 73/120
	I0610 10:43:44.139566   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 74/120
	I0610 10:43:45.141622   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 75/120
	I0610 10:43:46.143562   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 76/120
	I0610 10:43:47.144992   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 77/120
	I0610 10:43:48.146434   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 78/120
	I0610 10:43:49.147945   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 79/120
	I0610 10:43:50.150076   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 80/120
	I0610 10:43:51.151384   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 81/120
	I0610 10:43:52.152785   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 82/120
	I0610 10:43:53.154188   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 83/120
	I0610 10:43:54.155496   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 84/120
	I0610 10:43:55.157591   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 85/120
	I0610 10:43:56.159724   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 86/120
	I0610 10:43:57.161034   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 87/120
	I0610 10:43:58.162214   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 88/120
	I0610 10:43:59.164546   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 89/120
	I0610 10:44:00.166399   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 90/120
	I0610 10:44:01.167836   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 91/120
	I0610 10:44:02.169172   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 92/120
	I0610 10:44:03.171748   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 93/120
	I0610 10:44:04.174040   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 94/120
	I0610 10:44:05.175940   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 95/120
	I0610 10:44:06.177415   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 96/120
	I0610 10:44:07.178759   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 97/120
	I0610 10:44:08.180297   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 98/120
	I0610 10:44:09.182072   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 99/120
	I0610 10:44:10.183717   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 100/120
	I0610 10:44:11.185685   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 101/120
	I0610 10:44:12.187173   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 102/120
	I0610 10:44:13.188571   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 103/120
	I0610 10:44:14.190180   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 104/120
	I0610 10:44:15.191904   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 105/120
	I0610 10:44:16.193378   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 106/120
	I0610 10:44:17.194792   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 107/120
	I0610 10:44:18.196928   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 108/120
	I0610 10:44:19.198317   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 109/120
	I0610 10:44:20.200589   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 110/120
	I0610 10:44:21.202273   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 111/120
	I0610 10:44:22.203909   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 112/120
	I0610 10:44:23.205067   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 113/120
	I0610 10:44:24.206622   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 114/120
	I0610 10:44:25.208514   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 115/120
	I0610 10:44:26.210280   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 116/120
	I0610 10:44:27.211749   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 117/120
	I0610 10:44:28.212970   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 118/120
	I0610 10:44:29.214358   25862 main.go:141] libmachine: (ha-565925-m02) Waiting for machine to stop 119/120
	I0610 10:44:30.215254   25862 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0610 10:44:30.215571   25862 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-565925 node stop m02 -v=7 --alsologtostderr": exit status 30
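
The stderr above shows the kvm2 driver polling once per second for a clean shutdown, from "Waiting for machine to stop 0/120" through 119/120, and then giving up with the VM still reported as "Running", which is what surfaces as exit status 30. If this reproduces locally, the node's state during the hang can be watched out of band; a minimal sketch, assuming the libvirt domain carries the node name:

	# watch what libvirt reports for the node while minikube waits for it to stop
	watch -n 5 'sudo virsh domstate ha-565925-m02'
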
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr
E0610 10:44:41.757055   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr: exit status 3 (19.104706723s)

                                                
                                                
-- stdout --
	ha-565925
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565925-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-565925-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565925-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:44:30.261587   26318 out.go:291] Setting OutFile to fd 1 ...
	I0610 10:44:30.261878   26318 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:44:30.261888   26318 out.go:304] Setting ErrFile to fd 2...
	I0610 10:44:30.261892   26318 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:44:30.262094   26318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 10:44:30.262333   26318 out.go:298] Setting JSON to false
	I0610 10:44:30.262368   26318 mustload.go:65] Loading cluster: ha-565925
	I0610 10:44:30.262526   26318 notify.go:220] Checking for updates...
	I0610 10:44:30.262917   26318 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:44:30.262950   26318 status.go:255] checking status of ha-565925 ...
	I0610 10:44:30.263519   26318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:30.263597   26318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:30.280418   26318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39379
	I0610 10:44:30.280982   26318 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:30.281684   26318 main.go:141] libmachine: Using API Version  1
	I0610 10:44:30.281705   26318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:30.282054   26318 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:30.282218   26318 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:44:30.283846   26318 status.go:330] ha-565925 host status = "Running" (err=<nil>)
	I0610 10:44:30.283859   26318 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:44:30.284132   26318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:30.284183   26318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:30.299076   26318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I0610 10:44:30.299617   26318 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:30.300126   26318 main.go:141] libmachine: Using API Version  1
	I0610 10:44:30.300146   26318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:30.300476   26318 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:30.300675   26318 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:44:30.303403   26318 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:44:30.304029   26318 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:44:30.304065   26318 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:44:30.304287   26318 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:44:30.304808   26318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:30.304878   26318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:30.320189   26318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41433
	I0610 10:44:30.320604   26318 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:30.321123   26318 main.go:141] libmachine: Using API Version  1
	I0610 10:44:30.321149   26318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:30.321424   26318 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:30.321591   26318 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:44:30.321795   26318 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:44:30.321826   26318 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:44:30.326653   26318 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:44:30.326684   26318 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:44:30.326720   26318 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:44:30.326738   26318 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:44:30.326852   26318 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:44:30.327052   26318 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:44:30.327213   26318 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:44:30.414510   26318 ssh_runner.go:195] Run: systemctl --version
	I0610 10:44:30.421877   26318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:44:30.444659   26318 kubeconfig.go:125] found "ha-565925" server: "https://192.168.39.254:8443"
	I0610 10:44:30.444693   26318 api_server.go:166] Checking apiserver status ...
	I0610 10:44:30.444727   26318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:44:30.469853   26318 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	W0610 10:44:30.482508   26318 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 10:44:30.482552   26318 ssh_runner.go:195] Run: ls
	I0610 10:44:30.488525   26318 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0610 10:44:30.493004   26318 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0610 10:44:30.493031   26318 status.go:422] ha-565925 apiserver status = Running (err=<nil>)
	I0610 10:44:30.493047   26318 status.go:257] ha-565925 status: &{Name:ha-565925 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:44:30.493065   26318 status.go:255] checking status of ha-565925-m02 ...
	I0610 10:44:30.493334   26318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:30.493368   26318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:30.508581   26318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38755
	I0610 10:44:30.509023   26318 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:30.509506   26318 main.go:141] libmachine: Using API Version  1
	I0610 10:44:30.509539   26318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:30.509832   26318 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:30.510054   26318 main.go:141] libmachine: (ha-565925-m02) Calling .GetState
	I0610 10:44:30.511626   26318 status.go:330] ha-565925-m02 host status = "Running" (err=<nil>)
	I0610 10:44:30.511642   26318 host.go:66] Checking if "ha-565925-m02" exists ...
	I0610 10:44:30.511941   26318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:30.511977   26318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:30.527362   26318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38165
	I0610 10:44:30.527836   26318 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:30.528370   26318 main.go:141] libmachine: Using API Version  1
	I0610 10:44:30.528393   26318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:30.528731   26318 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:30.528911   26318 main.go:141] libmachine: (ha-565925-m02) Calling .GetIP
	I0610 10:44:30.531715   26318 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:44:30.532131   26318 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:44:30.532159   26318 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:44:30.532314   26318 host.go:66] Checking if "ha-565925-m02" exists ...
	I0610 10:44:30.532605   26318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:30.532638   26318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:30.549258   26318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34307
	I0610 10:44:30.549725   26318 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:30.550339   26318 main.go:141] libmachine: Using API Version  1
	I0610 10:44:30.550366   26318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:30.550660   26318 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:30.550807   26318 main.go:141] libmachine: (ha-565925-m02) Calling .DriverName
	I0610 10:44:30.550962   26318 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:44:30.550986   26318 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:44:30.554065   26318 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:44:30.554476   26318 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:44:30.554533   26318 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:44:30.554652   26318 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:44:30.554808   26318 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:44:30.554969   26318 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:44:30.555124   26318 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/id_rsa Username:docker}
	W0610 10:44:48.961133   26318 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.230:22: connect: no route to host
	W0610 10:44:48.961246   26318 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host
	E0610 10:44:48.961260   26318 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host
	I0610 10:44:48.961266   26318 status.go:257] ha-565925-m02 status: &{Name:ha-565925-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0610 10:44:48.961283   26318 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host
	I0610 10:44:48.961291   26318 status.go:255] checking status of ha-565925-m03 ...
	I0610 10:44:48.961575   26318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:48.961601   26318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:48.975964   26318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46795
	I0610 10:44:48.976384   26318 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:48.976835   26318 main.go:141] libmachine: Using API Version  1
	I0610 10:44:48.976857   26318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:48.977161   26318 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:48.977371   26318 main.go:141] libmachine: (ha-565925-m03) Calling .GetState
	I0610 10:44:48.978937   26318 status.go:330] ha-565925-m03 host status = "Running" (err=<nil>)
	I0610 10:44:48.978956   26318 host.go:66] Checking if "ha-565925-m03" exists ...
	I0610 10:44:48.979278   26318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:48.979320   26318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:48.994519   26318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45495
	I0610 10:44:48.994925   26318 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:48.995406   26318 main.go:141] libmachine: Using API Version  1
	I0610 10:44:48.995431   26318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:48.995737   26318 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:48.995928   26318 main.go:141] libmachine: (ha-565925-m03) Calling .GetIP
	I0610 10:44:48.998648   26318 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:44:48.999139   26318 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:44:48.999175   26318 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:44:48.999295   26318 host.go:66] Checking if "ha-565925-m03" exists ...
	I0610 10:44:48.999565   26318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:48.999598   26318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:49.013767   26318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35745
	I0610 10:44:49.014255   26318 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:49.014711   26318 main.go:141] libmachine: Using API Version  1
	I0610 10:44:49.014733   26318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:49.015033   26318 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:49.015260   26318 main.go:141] libmachine: (ha-565925-m03) Calling .DriverName
	I0610 10:44:49.015478   26318 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:44:49.015497   26318 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:44:49.018144   26318 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:44:49.018664   26318 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:44:49.018691   26318 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:44:49.018829   26318 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:44:49.019033   26318 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:44:49.019193   26318 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:44:49.019338   26318 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa Username:docker}
	I0610 10:44:49.101041   26318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:44:49.118491   26318 kubeconfig.go:125] found "ha-565925" server: "https://192.168.39.254:8443"
	I0610 10:44:49.118518   26318 api_server.go:166] Checking apiserver status ...
	I0610 10:44:49.118555   26318 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:44:49.138897   26318 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1508/cgroup
	W0610 10:44:49.148470   26318 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1508/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 10:44:49.148519   26318 ssh_runner.go:195] Run: ls
	I0610 10:44:49.152614   26318 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0610 10:44:49.156597   26318 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0610 10:44:49.156622   26318 status.go:422] ha-565925-m03 apiserver status = Running (err=<nil>)
	I0610 10:44:49.156633   26318 status.go:257] ha-565925-m03 status: &{Name:ha-565925-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:44:49.156653   26318 status.go:255] checking status of ha-565925-m04 ...
	I0610 10:44:49.156924   26318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:49.156977   26318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:49.171458   26318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33947
	I0610 10:44:49.172007   26318 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:49.172535   26318 main.go:141] libmachine: Using API Version  1
	I0610 10:44:49.172563   26318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:49.172843   26318 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:49.173020   26318 main.go:141] libmachine: (ha-565925-m04) Calling .GetState
	I0610 10:44:49.174540   26318 status.go:330] ha-565925-m04 host status = "Running" (err=<nil>)
	I0610 10:44:49.174554   26318 host.go:66] Checking if "ha-565925-m04" exists ...
	I0610 10:44:49.174831   26318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:49.174866   26318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:49.190299   26318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33783
	I0610 10:44:49.190714   26318 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:49.191153   26318 main.go:141] libmachine: Using API Version  1
	I0610 10:44:49.191174   26318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:49.191424   26318 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:49.191590   26318 main.go:141] libmachine: (ha-565925-m04) Calling .GetIP
	I0610 10:44:49.194243   26318 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:44:49.194682   26318 main.go:141] libmachine: (ha-565925-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:20:94", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:41:43 +0000 UTC Type:0 Mac:52:54:00:c5:20:94 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-565925-m04 Clientid:01:52:54:00:c5:20:94}
	I0610 10:44:49.194709   26318 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined IP address 192.168.39.229 and MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:44:49.194838   26318 host.go:66] Checking if "ha-565925-m04" exists ...
	I0610 10:44:49.195208   26318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:49.195233   26318 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:49.210729   26318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I0610 10:44:49.211102   26318 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:49.211540   26318 main.go:141] libmachine: Using API Version  1
	I0610 10:44:49.211560   26318 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:49.211846   26318 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:49.212045   26318 main.go:141] libmachine: (ha-565925-m04) Calling .DriverName
	I0610 10:44:49.212217   26318 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:44:49.212236   26318 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHHostname
	I0610 10:44:49.214752   26318 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:44:49.215130   26318 main.go:141] libmachine: (ha-565925-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:20:94", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:41:43 +0000 UTC Type:0 Mac:52:54:00:c5:20:94 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-565925-m04 Clientid:01:52:54:00:c5:20:94}
	I0610 10:44:49.215156   26318 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined IP address 192.168.39.229 and MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:44:49.215288   26318 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHPort
	I0610 10:44:49.215446   26318 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHKeyPath
	I0610 10:44:49.215604   26318 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHUsername
	I0610 10:44:49.215736   26318 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m04/id_rsa Username:docker}
	I0610 10:44:49.302216   26318 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:44:49.319036   26318 status.go:257] ha-565925-m04 status: &{Name:ha-565925-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-565925 -n ha-565925
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-565925 logs -n 25: (1.340452005s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-565925 cp ha-565925-m03:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1107448961/001/cp-test_ha-565925-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m03:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925:/home/docker/cp-test_ha-565925-m03_ha-565925.txt                       |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925 sudo cat                                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m03_ha-565925.txt                                 |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m03:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m02:/home/docker/cp-test_ha-565925-m03_ha-565925-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925-m02 sudo cat                                          | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m03_ha-565925-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m03:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04:/home/docker/cp-test_ha-565925-m03_ha-565925-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925-m04 sudo cat                                          | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m03_ha-565925-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-565925 cp testdata/cp-test.txt                                                | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1107448961/001/cp-test_ha-565925-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925:/home/docker/cp-test_ha-565925-m04_ha-565925.txt                       |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925 sudo cat                                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m04_ha-565925.txt                                 |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m02:/home/docker/cp-test_ha-565925-m04_ha-565925-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925-m02 sudo cat                                          | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m04_ha-565925-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m03:/home/docker/cp-test_ha-565925-m04_ha-565925-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925-m03 sudo cat                                          | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m04_ha-565925-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-565925 node stop m02 -v=7                                                     | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 10:37:51
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 10:37:51.251761   21811 out.go:291] Setting OutFile to fd 1 ...
	I0610 10:37:51.251853   21811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:37:51.251861   21811 out.go:304] Setting ErrFile to fd 2...
	I0610 10:37:51.251864   21811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:37:51.252062   21811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 10:37:51.252626   21811 out.go:298] Setting JSON to false
	I0610 10:37:51.253501   21811 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1212,"bootTime":1718014659,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 10:37:51.253561   21811 start.go:139] virtualization: kvm guest
	I0610 10:37:51.255741   21811 out.go:177] * [ha-565925] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 10:37:51.257390   21811 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 10:37:51.257350   21811 notify.go:220] Checking for updates...
	I0610 10:37:51.258943   21811 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:37:51.260269   21811 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 10:37:51.261624   21811 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:37:51.262918   21811 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 10:37:51.264223   21811 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:37:51.265681   21811 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 10:37:51.300203   21811 out.go:177] * Using the kvm2 driver based on user configuration
	I0610 10:37:51.301562   21811 start.go:297] selected driver: kvm2
	I0610 10:37:51.301578   21811 start.go:901] validating driver "kvm2" against <nil>
	I0610 10:37:51.301589   21811 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:37:51.302304   21811 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:37:51.302383   21811 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19046-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0610 10:37:51.317065   21811 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0610 10:37:51.317112   21811 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 10:37:51.317313   21811 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:37:51.317338   21811 cni.go:84] Creating CNI manager for ""
	I0610 10:37:51.317345   21811 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0610 10:37:51.317350   21811 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0610 10:37:51.317429   21811 start.go:340] cluster config:
	{Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:37:51.317515   21811 iso.go:125] acquiring lock: {Name:mk85871dbcb370cef71a4aa2ff7ef43503908c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:37:51.319454   21811 out.go:177] * Starting "ha-565925" primary control-plane node in "ha-565925" cluster
	I0610 10:37:51.320880   21811 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 10:37:51.321040   21811 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0610 10:37:51.321071   21811 cache.go:56] Caching tarball of preloaded images
	I0610 10:37:51.321222   21811 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 10:37:51.321232   21811 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0610 10:37:51.322248   21811 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json ...
	I0610 10:37:51.322286   21811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json: {Name:mk7c15934ae50915ca2e8e0e876fe86b3ff227de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:37:51.322436   21811 start.go:360] acquireMachinesLock for ha-565925: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:37:51.322465   21811 start.go:364] duration metric: took 15.95µs to acquireMachinesLock for "ha-565925"
	I0610 10:37:51.322481   21811 start.go:93] Provisioning new machine with config: &{Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 10:37:51.322539   21811 start.go:125] createHost starting for "" (driver="kvm2")
	I0610 10:37:51.324589   21811 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 10:37:51.324708   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:37:51.324743   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:37:51.338690   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35319
	I0610 10:37:51.339171   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:37:51.339791   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:37:51.339820   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:37:51.340154   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:37:51.340348   21811 main.go:141] libmachine: (ha-565925) Calling .GetMachineName
	I0610 10:37:51.340485   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:37:51.340650   21811 start.go:159] libmachine.API.Create for "ha-565925" (driver="kvm2")
	I0610 10:37:51.340679   21811 client.go:168] LocalClient.Create starting
	I0610 10:37:51.340707   21811 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem
	I0610 10:37:51.340737   21811 main.go:141] libmachine: Decoding PEM data...
	I0610 10:37:51.340753   21811 main.go:141] libmachine: Parsing certificate...
	I0610 10:37:51.340805   21811 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem
	I0610 10:37:51.340830   21811 main.go:141] libmachine: Decoding PEM data...
	I0610 10:37:51.340849   21811 main.go:141] libmachine: Parsing certificate...
	I0610 10:37:51.340874   21811 main.go:141] libmachine: Running pre-create checks...
	I0610 10:37:51.340886   21811 main.go:141] libmachine: (ha-565925) Calling .PreCreateCheck
	I0610 10:37:51.341201   21811 main.go:141] libmachine: (ha-565925) Calling .GetConfigRaw
	I0610 10:37:51.341623   21811 main.go:141] libmachine: Creating machine...
	I0610 10:37:51.341642   21811 main.go:141] libmachine: (ha-565925) Calling .Create
	I0610 10:37:51.341760   21811 main.go:141] libmachine: (ha-565925) Creating KVM machine...
	I0610 10:37:51.343096   21811 main.go:141] libmachine: (ha-565925) DBG | found existing default KVM network
	I0610 10:37:51.343904   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:51.343750   21834 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0610 10:37:51.343931   21811 main.go:141] libmachine: (ha-565925) DBG | created network xml: 
	I0610 10:37:51.343939   21811 main.go:141] libmachine: (ha-565925) DBG | <network>
	I0610 10:37:51.343945   21811 main.go:141] libmachine: (ha-565925) DBG |   <name>mk-ha-565925</name>
	I0610 10:37:51.343949   21811 main.go:141] libmachine: (ha-565925) DBG |   <dns enable='no'/>
	I0610 10:37:51.343955   21811 main.go:141] libmachine: (ha-565925) DBG |   
	I0610 10:37:51.343961   21811 main.go:141] libmachine: (ha-565925) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0610 10:37:51.343970   21811 main.go:141] libmachine: (ha-565925) DBG |     <dhcp>
	I0610 10:37:51.343976   21811 main.go:141] libmachine: (ha-565925) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0610 10:37:51.343984   21811 main.go:141] libmachine: (ha-565925) DBG |     </dhcp>
	I0610 10:37:51.343991   21811 main.go:141] libmachine: (ha-565925) DBG |   </ip>
	I0610 10:37:51.343999   21811 main.go:141] libmachine: (ha-565925) DBG |   
	I0610 10:37:51.344003   21811 main.go:141] libmachine: (ha-565925) DBG | </network>
	I0610 10:37:51.344012   21811 main.go:141] libmachine: (ha-565925) DBG | 
	I0610 10:37:51.349106   21811 main.go:141] libmachine: (ha-565925) DBG | trying to create private KVM network mk-ha-565925 192.168.39.0/24...
	I0610 10:37:51.417135   21811 main.go:141] libmachine: (ha-565925) DBG | private KVM network mk-ha-565925 192.168.39.0/24 created
	I0610 10:37:51.417176   21811 main.go:141] libmachine: (ha-565925) Setting up store path in /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925 ...
	I0610 10:37:51.417190   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:51.417100   21834 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:37:51.417208   21811 main.go:141] libmachine: (ha-565925) Building disk image from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0610 10:37:51.417255   21811 main.go:141] libmachine: (ha-565925) Downloading /home/jenkins/minikube-integration/19046-3880/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 10:37:51.649309   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:51.649194   21834 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa...
	I0610 10:37:51.811611   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:51.811494   21834 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/ha-565925.rawdisk...
	I0610 10:37:51.811644   21811 main.go:141] libmachine: (ha-565925) DBG | Writing magic tar header
	I0610 10:37:51.811653   21811 main.go:141] libmachine: (ha-565925) DBG | Writing SSH key tar header
	I0610 10:37:51.811660   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:51.811622   21834 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925 ...
	I0610 10:37:51.811814   21811 main.go:141] libmachine: (ha-565925) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925 (perms=drwx------)
	I0610 10:37:51.811851   21811 main.go:141] libmachine: (ha-565925) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925
	I0610 10:37:51.811864   21811 main.go:141] libmachine: (ha-565925) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines (perms=drwxr-xr-x)
	I0610 10:37:51.811881   21811 main.go:141] libmachine: (ha-565925) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube (perms=drwxr-xr-x)
	I0610 10:37:51.811894   21811 main.go:141] libmachine: (ha-565925) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880 (perms=drwxrwxr-x)
	I0610 10:37:51.811905   21811 main.go:141] libmachine: (ha-565925) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0610 10:37:51.811918   21811 main.go:141] libmachine: (ha-565925) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0610 10:37:51.811937   21811 main.go:141] libmachine: (ha-565925) Creating domain...
	I0610 10:37:51.811955   21811 main.go:141] libmachine: (ha-565925) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines
	I0610 10:37:51.811969   21811 main.go:141] libmachine: (ha-565925) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:37:51.811978   21811 main.go:141] libmachine: (ha-565925) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880
	I0610 10:37:51.812004   21811 main.go:141] libmachine: (ha-565925) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0610 10:37:51.812019   21811 main.go:141] libmachine: (ha-565925) DBG | Checking permissions on dir: /home/jenkins
	I0610 10:37:51.812028   21811 main.go:141] libmachine: (ha-565925) DBG | Checking permissions on dir: /home
	I0610 10:37:51.812038   21811 main.go:141] libmachine: (ha-565925) DBG | Skipping /home - not owner
	I0610 10:37:51.812917   21811 main.go:141] libmachine: (ha-565925) define libvirt domain using xml: 
	I0610 10:37:51.812937   21811 main.go:141] libmachine: (ha-565925) <domain type='kvm'>
	I0610 10:37:51.812965   21811 main.go:141] libmachine: (ha-565925)   <name>ha-565925</name>
	I0610 10:37:51.812977   21811 main.go:141] libmachine: (ha-565925)   <memory unit='MiB'>2200</memory>
	I0610 10:37:51.812985   21811 main.go:141] libmachine: (ha-565925)   <vcpu>2</vcpu>
	I0610 10:37:51.812991   21811 main.go:141] libmachine: (ha-565925)   <features>
	I0610 10:37:51.812999   21811 main.go:141] libmachine: (ha-565925)     <acpi/>
	I0610 10:37:51.813005   21811 main.go:141] libmachine: (ha-565925)     <apic/>
	I0610 10:37:51.813014   21811 main.go:141] libmachine: (ha-565925)     <pae/>
	I0610 10:37:51.813026   21811 main.go:141] libmachine: (ha-565925)     
	I0610 10:37:51.813038   21811 main.go:141] libmachine: (ha-565925)   </features>
	I0610 10:37:51.813045   21811 main.go:141] libmachine: (ha-565925)   <cpu mode='host-passthrough'>
	I0610 10:37:51.813053   21811 main.go:141] libmachine: (ha-565925)   
	I0610 10:37:51.813060   21811 main.go:141] libmachine: (ha-565925)   </cpu>
	I0610 10:37:51.813072   21811 main.go:141] libmachine: (ha-565925)   <os>
	I0610 10:37:51.813080   21811 main.go:141] libmachine: (ha-565925)     <type>hvm</type>
	I0610 10:37:51.813093   21811 main.go:141] libmachine: (ha-565925)     <boot dev='cdrom'/>
	I0610 10:37:51.813103   21811 main.go:141] libmachine: (ha-565925)     <boot dev='hd'/>
	I0610 10:37:51.813114   21811 main.go:141] libmachine: (ha-565925)     <bootmenu enable='no'/>
	I0610 10:37:51.813127   21811 main.go:141] libmachine: (ha-565925)   </os>
	I0610 10:37:51.813138   21811 main.go:141] libmachine: (ha-565925)   <devices>
	I0610 10:37:51.813147   21811 main.go:141] libmachine: (ha-565925)     <disk type='file' device='cdrom'>
	I0610 10:37:51.813165   21811 main.go:141] libmachine: (ha-565925)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/boot2docker.iso'/>
	I0610 10:37:51.813177   21811 main.go:141] libmachine: (ha-565925)       <target dev='hdc' bus='scsi'/>
	I0610 10:37:51.813189   21811 main.go:141] libmachine: (ha-565925)       <readonly/>
	I0610 10:37:51.813210   21811 main.go:141] libmachine: (ha-565925)     </disk>
	I0610 10:37:51.813224   21811 main.go:141] libmachine: (ha-565925)     <disk type='file' device='disk'>
	I0610 10:37:51.813237   21811 main.go:141] libmachine: (ha-565925)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0610 10:37:51.813254   21811 main.go:141] libmachine: (ha-565925)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/ha-565925.rawdisk'/>
	I0610 10:37:51.813265   21811 main.go:141] libmachine: (ha-565925)       <target dev='hda' bus='virtio'/>
	I0610 10:37:51.813277   21811 main.go:141] libmachine: (ha-565925)     </disk>
	I0610 10:37:51.813318   21811 main.go:141] libmachine: (ha-565925)     <interface type='network'>
	I0610 10:37:51.813342   21811 main.go:141] libmachine: (ha-565925)       <source network='mk-ha-565925'/>
	I0610 10:37:51.813353   21811 main.go:141] libmachine: (ha-565925)       <model type='virtio'/>
	I0610 10:37:51.813367   21811 main.go:141] libmachine: (ha-565925)     </interface>
	I0610 10:37:51.813380   21811 main.go:141] libmachine: (ha-565925)     <interface type='network'>
	I0610 10:37:51.813391   21811 main.go:141] libmachine: (ha-565925)       <source network='default'/>
	I0610 10:37:51.813402   21811 main.go:141] libmachine: (ha-565925)       <model type='virtio'/>
	I0610 10:37:51.813411   21811 main.go:141] libmachine: (ha-565925)     </interface>
	I0610 10:37:51.813424   21811 main.go:141] libmachine: (ha-565925)     <serial type='pty'>
	I0610 10:37:51.813437   21811 main.go:141] libmachine: (ha-565925)       <target port='0'/>
	I0610 10:37:51.813451   21811 main.go:141] libmachine: (ha-565925)     </serial>
	I0610 10:37:51.813460   21811 main.go:141] libmachine: (ha-565925)     <console type='pty'>
	I0610 10:37:51.813469   21811 main.go:141] libmachine: (ha-565925)       <target type='serial' port='0'/>
	I0610 10:37:51.813491   21811 main.go:141] libmachine: (ha-565925)     </console>
	I0610 10:37:51.813504   21811 main.go:141] libmachine: (ha-565925)     <rng model='virtio'>
	I0610 10:37:51.813519   21811 main.go:141] libmachine: (ha-565925)       <backend model='random'>/dev/random</backend>
	I0610 10:37:51.813532   21811 main.go:141] libmachine: (ha-565925)     </rng>
	I0610 10:37:51.813541   21811 main.go:141] libmachine: (ha-565925)     
	I0610 10:37:51.813551   21811 main.go:141] libmachine: (ha-565925)     
	I0610 10:37:51.813561   21811 main.go:141] libmachine: (ha-565925)   </devices>
	I0610 10:37:51.813572   21811 main.go:141] libmachine: (ha-565925) </domain>
	I0610 10:37:51.813581   21811 main.go:141] libmachine: (ha-565925) 
	I0610 10:37:51.817903   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:6a:77:ed in network default
	I0610 10:37:51.818489   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:37:51.818505   21811 main.go:141] libmachine: (ha-565925) Ensuring networks are active...
	I0610 10:37:51.819304   21811 main.go:141] libmachine: (ha-565925) Ensuring network default is active
	I0610 10:37:51.819598   21811 main.go:141] libmachine: (ha-565925) Ensuring network mk-ha-565925 is active
	I0610 10:37:51.820102   21811 main.go:141] libmachine: (ha-565925) Getting domain xml...
	I0610 10:37:51.820750   21811 main.go:141] libmachine: (ha-565925) Creating domain...
	I0610 10:37:53.008336   21811 main.go:141] libmachine: (ha-565925) Waiting to get IP...
	I0610 10:37:53.009359   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:37:53.009768   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find current IP address of domain ha-565925 in network mk-ha-565925
	I0610 10:37:53.009802   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:53.009746   21834 retry.go:31] will retry after 246.064928ms: waiting for machine to come up
	I0610 10:37:53.257305   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:37:53.257789   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find current IP address of domain ha-565925 in network mk-ha-565925
	I0610 10:37:53.257812   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:53.257724   21834 retry.go:31] will retry after 383.734399ms: waiting for machine to come up
	I0610 10:37:53.642985   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:37:53.643440   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find current IP address of domain ha-565925 in network mk-ha-565925
	I0610 10:37:53.643486   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:53.643424   21834 retry.go:31] will retry after 335.386365ms: waiting for machine to come up
	I0610 10:37:53.979774   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:37:53.980152   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find current IP address of domain ha-565925 in network mk-ha-565925
	I0610 10:37:53.980179   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:53.980114   21834 retry.go:31] will retry after 534.492321ms: waiting for machine to come up
	I0610 10:37:54.515753   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:37:54.516152   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find current IP address of domain ha-565925 in network mk-ha-565925
	I0610 10:37:54.516183   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:54.516103   21834 retry.go:31] will retry after 497.370783ms: waiting for machine to come up
	I0610 10:37:55.014704   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:37:55.015039   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find current IP address of domain ha-565925 in network mk-ha-565925
	I0610 10:37:55.015060   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:55.014999   21834 retry.go:31] will retry after 838.175864ms: waiting for machine to come up
	I0610 10:37:55.854337   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:37:55.854724   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find current IP address of domain ha-565925 in network mk-ha-565925
	I0610 10:37:55.854754   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:55.854678   21834 retry.go:31] will retry after 801.114412ms: waiting for machine to come up
	I0610 10:37:56.657501   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:37:56.657898   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find current IP address of domain ha-565925 in network mk-ha-565925
	I0610 10:37:56.657929   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:56.657844   21834 retry.go:31] will retry after 1.228462609s: waiting for machine to come up
	I0610 10:37:57.888227   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:37:57.888543   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find current IP address of domain ha-565925 in network mk-ha-565925
	I0610 10:37:57.888566   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:57.888493   21834 retry.go:31] will retry after 1.223943325s: waiting for machine to come up
	I0610 10:37:59.113957   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:37:59.114450   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find current IP address of domain ha-565925 in network mk-ha-565925
	I0610 10:37:59.114472   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:59.114403   21834 retry.go:31] will retry after 1.888368081s: waiting for machine to come up
	I0610 10:38:01.005452   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:01.005881   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find current IP address of domain ha-565925 in network mk-ha-565925
	I0610 10:38:01.005908   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:38:01.005831   21834 retry.go:31] will retry after 2.682748595s: waiting for machine to come up
	I0610 10:38:03.691612   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:03.692037   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find current IP address of domain ha-565925 in network mk-ha-565925
	I0610 10:38:03.692063   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:38:03.691999   21834 retry.go:31] will retry after 2.798658731s: waiting for machine to come up
	I0610 10:38:06.492418   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:06.492883   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find current IP address of domain ha-565925 in network mk-ha-565925
	I0610 10:38:06.492915   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:38:06.492834   21834 retry.go:31] will retry after 3.670059356s: waiting for machine to come up
	I0610 10:38:10.164011   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:10.164464   21811 main.go:141] libmachine: (ha-565925) Found IP for machine: 192.168.39.208
	I0610 10:38:10.164484   21811 main.go:141] libmachine: (ha-565925) Reserving static IP address...
	I0610 10:38:10.164498   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has current primary IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:10.164790   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find host DHCP lease matching {name: "ha-565925", mac: "52:54:00:d3:d6:ef", ip: "192.168.39.208"} in network mk-ha-565925
	I0610 10:38:10.233619   21811 main.go:141] libmachine: (ha-565925) DBG | Getting to WaitForSSH function...
	I0610 10:38:10.233648   21811 main.go:141] libmachine: (ha-565925) Reserved static IP address: 192.168.39.208
	I0610 10:38:10.233662   21811 main.go:141] libmachine: (ha-565925) Waiting for SSH to be available...
	I0610 10:38:10.236307   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:10.236581   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925
	I0610 10:38:10.236605   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find defined IP address of network mk-ha-565925 interface with MAC address 52:54:00:d3:d6:ef
	I0610 10:38:10.236729   21811 main.go:141] libmachine: (ha-565925) DBG | Using SSH client type: external
	I0610 10:38:10.236758   21811 main.go:141] libmachine: (ha-565925) DBG | Using SSH private key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa (-rw-------)
	I0610 10:38:10.236797   21811 main.go:141] libmachine: (ha-565925) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0610 10:38:10.236816   21811 main.go:141] libmachine: (ha-565925) DBG | About to run SSH command:
	I0610 10:38:10.236833   21811 main.go:141] libmachine: (ha-565925) DBG | exit 0
	I0610 10:38:10.240364   21811 main.go:141] libmachine: (ha-565925) DBG | SSH cmd err, output: exit status 255: 
	I0610 10:38:10.240389   21811 main.go:141] libmachine: (ha-565925) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0610 10:38:10.240402   21811 main.go:141] libmachine: (ha-565925) DBG | command : exit 0
	I0610 10:38:10.240409   21811 main.go:141] libmachine: (ha-565925) DBG | err     : exit status 255
	I0610 10:38:10.240418   21811 main.go:141] libmachine: (ha-565925) DBG | output  : 
	I0610 10:38:13.241461   21811 main.go:141] libmachine: (ha-565925) DBG | Getting to WaitForSSH function...
	I0610 10:38:13.244539   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.244924   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:13.244979   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.245122   21811 main.go:141] libmachine: (ha-565925) DBG | Using SSH client type: external
	I0610 10:38:13.245148   21811 main.go:141] libmachine: (ha-565925) DBG | Using SSH private key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa (-rw-------)
	I0610 10:38:13.245205   21811 main.go:141] libmachine: (ha-565925) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0610 10:38:13.245226   21811 main.go:141] libmachine: (ha-565925) DBG | About to run SSH command:
	I0610 10:38:13.245247   21811 main.go:141] libmachine: (ha-565925) DBG | exit 0
	I0610 10:38:13.372605   21811 main.go:141] libmachine: (ha-565925) DBG | SSH cmd err, output: <nil>: 
	I0610 10:38:13.372854   21811 main.go:141] libmachine: (ha-565925) KVM machine creation complete!
	I0610 10:38:13.373161   21811 main.go:141] libmachine: (ha-565925) Calling .GetConfigRaw
	I0610 10:38:13.373727   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:38:13.373891   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:38:13.374083   21811 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0610 10:38:13.374101   21811 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:38:13.375305   21811 main.go:141] libmachine: Detecting operating system of created instance...
	I0610 10:38:13.375320   21811 main.go:141] libmachine: Waiting for SSH to be available...
	I0610 10:38:13.375326   21811 main.go:141] libmachine: Getting to WaitForSSH function...
	I0610 10:38:13.375332   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:38:13.377839   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.378205   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:13.378238   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.378323   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:38:13.378511   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:13.378691   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:13.378889   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:38:13.379122   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:38:13.379352   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:38:13.379364   21811 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0610 10:38:13.488188   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 10:38:13.488221   21811 main.go:141] libmachine: Detecting the provisioner...
	I0610 10:38:13.488235   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:38:13.490919   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.491303   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:13.491328   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.491520   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:38:13.491692   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:13.491853   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:13.491947   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:38:13.492073   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:38:13.492224   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:38:13.492240   21811 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0610 10:38:13.601278   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0610 10:38:13.601344   21811 main.go:141] libmachine: found compatible host: buildroot
	I0610 10:38:13.601350   21811 main.go:141] libmachine: Provisioning with buildroot...
	I0610 10:38:13.601358   21811 main.go:141] libmachine: (ha-565925) Calling .GetMachineName
	I0610 10:38:13.601582   21811 buildroot.go:166] provisioning hostname "ha-565925"
	I0610 10:38:13.601602   21811 main.go:141] libmachine: (ha-565925) Calling .GetMachineName
	I0610 10:38:13.601751   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:38:13.604134   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.604396   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:13.604425   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.604563   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:38:13.604755   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:13.604937   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:13.605076   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:38:13.605235   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:38:13.605439   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:38:13.605455   21811 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565925 && echo "ha-565925" | sudo tee /etc/hostname
	I0610 10:38:13.726337   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565925
	
	I0610 10:38:13.726370   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:38:13.729270   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.729605   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:13.729634   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.729783   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:38:13.729962   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:13.730124   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:13.730279   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:38:13.730441   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:38:13.730606   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:38:13.730621   21811 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565925' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565925/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565925' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 10:38:13.849953   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
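
The shell snippet above makes the new hostname resolvable on the guest: if no /etc/hosts line already ends in ha-565925, it either rewrites an existing 127.0.1.1 entry or appends one. A minimal local Go sketch of the same idempotent update (a hypothetical helper, not minikube's own code; the file path is a placeholder):

    // ensure_hosts.go — local sketch of the idempotent /etc/hosts update above
    // (hypothetical helper, not minikube's own code; the path is a placeholder).
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostname mirrors the shell logic: leave the file untouched if any
    // line already ends with the hostname, otherwise rewrite an existing
    // 127.0.1.1 entry or append a new one.
    func ensureHostname(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(string(data), "\n")
        for _, l := range lines {
            f := strings.Fields(l)
            if len(f) > 1 && f[len(f)-1] == hostname {
                return nil // already resolvable
            }
        }
        replaced := false
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + hostname
                replaced = true
                break
            }
        }
        if !replaced {
            lines = append(lines, "127.0.1.1 "+hostname)
        }
        return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
    }

    func main() {
        if err := ensureHostname("/tmp/hosts-sketch", "ha-565925"); err != nil {
            fmt.Println("update failed:", err)
        }
    }
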
	I0610 10:38:13.849994   21811 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19046-3880/.minikube CaCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19046-3880/.minikube}
	I0610 10:38:13.850014   21811 buildroot.go:174] setting up certificates
	I0610 10:38:13.850025   21811 provision.go:84] configureAuth start
	I0610 10:38:13.850033   21811 main.go:141] libmachine: (ha-565925) Calling .GetMachineName
	I0610 10:38:13.850358   21811 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:38:13.853076   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.853447   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:13.853488   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.853577   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:38:13.855383   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.855633   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:13.855662   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.855748   21811 provision.go:143] copyHostCerts
	I0610 10:38:13.855797   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 10:38:13.855864   21811 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem, removing ...
	I0610 10:38:13.855878   21811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 10:38:13.855979   21811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem (1082 bytes)
	I0610 10:38:13.856105   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 10:38:13.856136   21811 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem, removing ...
	I0610 10:38:13.856147   21811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 10:38:13.856201   21811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem (1123 bytes)
	I0610 10:38:13.856273   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 10:38:13.856301   21811 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem, removing ...
	I0610 10:38:13.856312   21811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 10:38:13.856362   21811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem (1675 bytes)
	I0610 10:38:13.856449   21811 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem org=jenkins.ha-565925 san=[127.0.0.1 192.168.39.208 ha-565925 localhost minikube]
	I0610 10:38:14.027814   21811 provision.go:177] copyRemoteCerts
	I0610 10:38:14.027896   21811 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 10:38:14.027925   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:38:14.030316   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.030609   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:14.030639   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.030782   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:38:14.031038   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:14.031212   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:38:14.031342   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:38:14.114541   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 10:38:14.114600   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 10:38:14.137229   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 10:38:14.137297   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0610 10:38:14.159266   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 10:38:14.159335   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 10:38:14.181114   21811 provision.go:87] duration metric: took 331.078282ms to configureAuth
	I0610 10:38:14.181140   21811 buildroot.go:189] setting minikube options for container-runtime
	I0610 10:38:14.181300   21811 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:38:14.181368   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:38:14.183658   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.183974   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:14.183994   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.184189   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:38:14.184355   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:14.184466   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:14.184620   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:38:14.184806   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:38:14.184983   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:38:14.185005   21811 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 10:38:14.448439   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 10:38:14.448466   21811 main.go:141] libmachine: Checking connection to Docker...
	I0610 10:38:14.448474   21811 main.go:141] libmachine: (ha-565925) Calling .GetURL
	I0610 10:38:14.449817   21811 main.go:141] libmachine: (ha-565925) DBG | Using libvirt version 6000000
	I0610 10:38:14.451654   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.451966   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:14.452025   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.452184   21811 main.go:141] libmachine: Docker is up and running!
	I0610 10:38:14.452230   21811 main.go:141] libmachine: Reticulating splines...
	I0610 10:38:14.452247   21811 client.go:171] duration metric: took 23.111560156s to LocalClient.Create
	I0610 10:38:14.452273   21811 start.go:167] duration metric: took 23.111624599s to libmachine.API.Create "ha-565925"
	I0610 10:38:14.452284   21811 start.go:293] postStartSetup for "ha-565925" (driver="kvm2")
	I0610 10:38:14.452293   21811 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 10:38:14.452309   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:38:14.452542   21811 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 10:38:14.452567   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:38:14.454560   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.454799   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:14.454824   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.455008   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:38:14.455188   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:14.455367   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:38:14.455512   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:38:14.538840   21811 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 10:38:14.542806   21811 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 10:38:14.542832   21811 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/addons for local assets ...
	I0610 10:38:14.542908   21811 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/files for local assets ...
	I0610 10:38:14.542996   21811 filesync.go:149] local asset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> 107582.pem in /etc/ssl/certs
	I0610 10:38:14.543006   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /etc/ssl/certs/107582.pem
	I0610 10:38:14.543099   21811 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 10:38:14.551864   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /etc/ssl/certs/107582.pem (1708 bytes)
	I0610 10:38:14.573983   21811 start.go:296] duration metric: took 121.686642ms for postStartSetup
	I0610 10:38:14.574041   21811 main.go:141] libmachine: (ha-565925) Calling .GetConfigRaw
	I0610 10:38:14.574626   21811 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:38:14.577198   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.577656   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:14.577688   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.577907   21811 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json ...
	I0610 10:38:14.578137   21811 start.go:128] duration metric: took 23.255589829s to createHost
	I0610 10:38:14.578159   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:38:14.580518   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.580885   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:14.580913   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.581025   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:38:14.581214   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:14.581374   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:14.581521   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:38:14.581670   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:38:14.581822   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:38:14.581832   21811 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 10:38:14.693318   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718015894.671688461
	
	I0610 10:38:14.693338   21811 fix.go:216] guest clock: 1718015894.671688461
	I0610 10:38:14.693345   21811 fix.go:229] Guest: 2024-06-10 10:38:14.671688461 +0000 UTC Remote: 2024-06-10 10:38:14.578150112 +0000 UTC m=+23.364236686 (delta=93.538349ms)
	I0610 10:38:14.693363   21811 fix.go:200] guest clock delta is within tolerance: 93.538349ms
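
The clock check above reads the guest's `date +%s.%N`, compares it against the host-side timestamp, and accepts the ~93ms delta as within tolerance. A rough Go sketch of that comparison (hypothetical; the 2s tolerance is an assumed value for illustration, not minikube's actual threshold):

    // clock_delta.go — rough sketch of the guest-clock comparison above
    // (hypothetical; the 2s tolerance is an assumed value, not minikube's).
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock parses `date +%s.%N` output such as "1718015894.671688461".
    // It assumes a full 9-digit nanosecond field, as in the log above.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1718015894.671688461")
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // assumed threshold for this sketch
        fmt.Printf("guest clock delta %v (tolerance %v, ok=%v)\n", delta, tolerance, delta <= tolerance)
    }
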
	I0610 10:38:14.693368   21811 start.go:83] releasing machines lock for "ha-565925", held for 23.370894383s
	I0610 10:38:14.693384   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:38:14.693618   21811 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:38:14.695981   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.696299   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:14.696326   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.696441   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:38:14.696879   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:38:14.697099   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:38:14.697159   21811 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 10:38:14.697204   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:38:14.697290   21811 ssh_runner.go:195] Run: cat /version.json
	I0610 10:38:14.697314   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:38:14.699825   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.699991   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.700212   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:14.700248   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.700321   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:14.700347   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.700356   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:38:14.700545   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:38:14.700576   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:14.700755   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:38:14.700767   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:14.700944   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:38:14.701039   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:38:14.701222   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:38:14.810876   21811 ssh_runner.go:195] Run: systemctl --version
	I0610 10:38:14.816440   21811 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 10:38:14.973102   21811 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 10:38:14.979604   21811 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 10:38:14.979679   21811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 10:38:14.996243   21811 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
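
Existing bridge/podman CNI configs are renamed to *.mk_disabled so they do not conflict with the CNI that minikube installs later. A local Go sketch of that rename pass (hypothetical helper; assumes write access to /etc/cni/net.d):

    // disable_bridge_cni.go — local sketch of the CNI cleanup above
    // (hypothetical helper; assumes write access to /etc/cni/net.d).
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        matches, err := filepath.Glob("/etc/cni/net.d/*")
        if err != nil {
            fmt.Println("glob failed:", err)
            return
        }
        for _, f := range matches {
            base := filepath.Base(f)
            if strings.HasSuffix(base, ".mk_disabled") {
                continue // already disabled
            }
            if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
                if err := os.Rename(f, f+".mk_disabled"); err != nil {
                    fmt.Println("rename failed:", err)
                    continue
                }
                fmt.Println("disabled", base)
            }
        }
    }
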
	I0610 10:38:14.996269   21811 start.go:494] detecting cgroup driver to use...
	I0610 10:38:14.996336   21811 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 10:38:15.014214   21811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 10:38:15.028552   21811 docker.go:217] disabling cri-docker service (if available) ...
	I0610 10:38:15.028604   21811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 10:38:15.042309   21811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 10:38:15.056424   21811 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 10:38:15.182913   21811 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 10:38:15.341472   21811 docker.go:233] disabling docker service ...
	I0610 10:38:15.341527   21811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 10:38:15.354612   21811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 10:38:15.366720   21811 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 10:38:15.477585   21811 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 10:38:15.594707   21811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 10:38:15.614378   21811 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 10:38:15.631233   21811 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0610 10:38:15.631290   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:38:15.641266   21811 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 10:38:15.641329   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:38:15.650895   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:38:15.660550   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:38:15.669822   21811 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 10:38:15.679392   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:38:15.688594   21811 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:38:15.704405   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
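
The sed commands above retarget CRI-O's drop-in config: pause image to registry.k8s.io/pause:3.9, cgroup manager to cgroupfs, conmon cgroup to "pod", plus the unprivileged-port sysctl. An in-memory Go sketch of the first three edits (hypothetical; it operates on a sample string, not the real /etc/crio/crio.conf.d/02-crio.conf):

    // crio_conf_edit.go — in-memory sketch of the first three sed edits above
    // (hypothetical; operates on a sample string, not the real drop-in file).
    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `pause_image = "registry.k8s.io/pause:3.8"
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `
        // s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        // s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        // Drop any conmon_cgroup line, then re-add it as "pod" after cgroup_manager,
        // mirroring the pair of sed commands in the log.
        conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
        conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
            ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
        fmt.Print(conf)
    }
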
	I0610 10:38:15.713975   21811 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 10:38:15.722631   21811 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0610 10:38:15.722682   21811 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0610 10:38:15.734616   21811 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
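
When the bridge-nf-call-iptables sysctl is missing, the recovery path is to load br_netfilter and then enable IPv4 forwarding, as the two commands above show. A Go sketch of that fallback (hypothetical; needs root, and it shells out to the stock modprobe binary):

    // netfilter_fallback.go — sketch of the recovery path above (hypothetical;
    // requires root and the stock modprobe binary on PATH).
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
        if _, err := os.Stat(key); os.IsNotExist(err) {
            // The sysctl only appears once the br_netfilter module is loaded.
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                fmt.Printf("modprobe br_netfilter failed: %v\n%s", err, out)
                return
            }
        }
        // Mirror `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
            fmt.Println("could not enable ip_forward (not root?):", err)
        }
    }
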
	I0610 10:38:15.743367   21811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:38:15.853208   21811 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0610 10:38:15.982454   21811 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 10:38:15.982525   21811 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 10:38:15.987288   21811 start.go:562] Will wait 60s for crictl version
	I0610 10:38:15.987338   21811 ssh_runner.go:195] Run: which crictl
	I0610 10:38:15.991081   21811 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 10:38:16.030890   21811 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0610 10:38:16.030953   21811 ssh_runner.go:195] Run: crio --version
	I0610 10:38:16.060156   21811 ssh_runner.go:195] Run: crio --version
	I0610 10:38:16.089799   21811 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0610 10:38:16.091090   21811 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:38:16.093471   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:16.093810   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:16.093840   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:16.093985   21811 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0610 10:38:16.097994   21811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 10:38:16.110114   21811 kubeadm.go:877] updating cluster {Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 10:38:16.110207   21811 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 10:38:16.110254   21811 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 10:38:16.140789   21811 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0610 10:38:16.140872   21811 ssh_runner.go:195] Run: which lz4
	I0610 10:38:16.144426   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0610 10:38:16.144517   21811 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0610 10:38:16.148171   21811 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 10:38:16.148196   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0610 10:38:17.352785   21811 crio.go:462] duration metric: took 1.208292318s to copy over tarball
	I0610 10:38:17.352869   21811 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 10:38:19.419050   21811 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.066150111s)
	I0610 10:38:19.419081   21811 crio.go:469] duration metric: took 2.066261747s to extract the tarball
	I0610 10:38:19.419091   21811 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 10:38:19.454990   21811 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 10:38:19.495814   21811 crio.go:514] all images are preloaded for cri-o runtime.
	I0610 10:38:19.495839   21811 cache_images.go:84] Images are preloaded, skipping loading
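
The two `crictl images --output json` runs bracket the preload: before extraction kube-apiserver:v1.30.1 is missing, so the tarball is copied and unpacked; afterwards all images are present and loading is skipped. A hedged Go sketch of that check (hypothetical helper; the JSON field names are assumed to match crictl's output shape):

    // preload_check.go — hedged sketch of the preload decision above
    // (hypothetical helper; JSON fields assumed to match `crictl images --output json`).
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func imagesPreloaded(want string) (bool, error) {
        out, err := exec.Command("crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var imgs crictlImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            return false, err
        }
        for _, img := range imgs.Images {
            for _, tag := range img.RepoTags {
                if strings.Contains(tag, want) {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := imagesPreloaded("registry.k8s.io/kube-apiserver:v1.30.1")
        fmt.Println("preloaded:", ok, "err:", err)
    }
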
	I0610 10:38:19.495846   21811 kubeadm.go:928] updating node { 192.168.39.208 8443 v1.30.1 crio true true} ...
	I0610 10:38:19.495969   21811 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565925 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 10:38:19.496037   21811 ssh_runner.go:195] Run: crio config
	I0610 10:38:19.541166   21811 cni.go:84] Creating CNI manager for ""
	I0610 10:38:19.541184   21811 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0610 10:38:19.541195   21811 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 10:38:19.541221   21811 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.208 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-565925 NodeName:ha-565925 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 10:38:19.541363   21811 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-565925"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
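
The generated kubeadm config above is fully concrete; only the node-specific values (node name, node IP, pod subnet) vary per machine. A toy text/template rendering of just those fields (hypothetical template, not the one minikube actually ships):

    // kubeadm_fragment.go — toy text/template rendering of the node-specific
    // fields in the config above (hypothetical template, not minikube's own).
    package main

    import (
        "os"
        "text/template"
    )

    const fragment = `nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    ---
    networking:
      podSubnet: "{{.PodCIDR}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(fragment))
        _ = t.Execute(os.Stdout, struct{ NodeName, NodeIP, PodCIDR string }{
            "ha-565925", "192.168.39.208", "10.244.0.0/16",
        })
    }
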
	
	I0610 10:38:19.541389   21811 kube-vip.go:115] generating kube-vip config ...
	I0610 10:38:19.541443   21811 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0610 10:38:19.557806   21811 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0610 10:38:19.557908   21811 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
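
The static pod above has kube-vip advertise the control-plane VIP 192.168.39.254 via ARP and load-balance port 8443 across control-plane nodes. A small hypothetical probe (not part of minikube) that checks the VIP is accepting connections once the pod is running:

    // vip_probe.go — small hypothetical probe (not part of minikube) for the
    // control-plane VIP configured above: once kube-vip advertises
    // 192.168.39.254, the apiserver port should accept TCP connections.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        addr := "192.168.39.254:8443"
        conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
        if err != nil {
            fmt.Println("VIP not reachable yet:", err)
            return
        }
        conn.Close()
        fmt.Println("VIP is accepting connections:", addr)
    }
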
	I0610 10:38:19.557970   21811 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 10:38:19.567350   21811 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 10:38:19.567431   21811 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0610 10:38:19.576067   21811 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0610 10:38:19.591463   21811 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 10:38:19.606260   21811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0610 10:38:19.621162   21811 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0610 10:38:19.635702   21811 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0610 10:38:19.639242   21811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 10:38:19.649613   21811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:38:19.769768   21811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 10:38:19.786120   21811 certs.go:68] Setting up /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925 for IP: 192.168.39.208
	I0610 10:38:19.786143   21811 certs.go:194] generating shared ca certs ...
	I0610 10:38:19.786171   21811 certs.go:226] acquiring lock for ca certs: {Name:mke8b68fecbd8b649419d142cdde25446085a9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:38:19.786337   21811 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key
	I0610 10:38:19.786388   21811 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key
	I0610 10:38:19.786402   21811 certs.go:256] generating profile certs ...
	I0610 10:38:19.786462   21811 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.key
	I0610 10:38:19.786481   21811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.crt with IP's: []
	I0610 10:38:20.019840   21811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.crt ...
	I0610 10:38:20.019874   21811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.crt: {Name:mk9042445f0af50cdbaf88bd29191a507127a8bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:38:20.020068   21811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.key ...
	I0610 10:38:20.020079   21811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.key: {Name:mkccce487881b7a4f98e7bb9c1f61d8a01ffb313 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:38:20.020153   21811 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.8612117b
	I0610 10:38:20.020168   21811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.8612117b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.254]
	I0610 10:38:20.081806   21811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.8612117b ...
	I0610 10:38:20.081837   21811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.8612117b: {Name:mk0a55eb47942ca3b243d80b3f5f5590fb9a2fe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:38:20.082000   21811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.8612117b ...
	I0610 10:38:20.082015   21811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.8612117b: {Name:mk5ae810d9d01af4bd4d963e64d1d55d2546edb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:38:20.082084   21811 certs.go:381] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.8612117b -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt
	I0610 10:38:20.082174   21811 certs.go:385] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.8612117b -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key
	I0610 10:38:20.082227   21811 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key
	I0610 10:38:20.082242   21811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt with IP's: []
	I0610 10:38:20.205365   21811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt ...
	I0610 10:38:20.205392   21811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt: {Name:mk7fbc7bf6d3d63bd22e3a09e4c6daba5500426b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:38:20.205538   21811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key ...
	I0610 10:38:20.205548   21811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key: {Name:mke59d4711702f0251bbe2e2eacb7af45b126045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
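
The apiserver certificate generated above carries IP SANs for the cluster service IP (10.96.0.1), localhost, the node IP, and the HA VIP. A compact standard-library sketch of issuing a cert with those SANs (self-signed here for brevity, whereas the real cert is signed by the minikube CA; the subject is a placeholder):

    // san_cert.go — standard-library sketch of issuing a serving cert with the
    // IP SANs listed above (self-signed for brevity; subject is a placeholder).
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            fmt.Println("keygen:", err)
            return
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"system:masters"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.39.208"), net.ParseIP("192.168.39.254"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            fmt.Println("create cert:", err)
            return
        }
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
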
	I0610 10:38:20.205607   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 10:38:20.205624   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 10:38:20.205634   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 10:38:20.205647   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 10:38:20.205660   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 10:38:20.205672   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 10:38:20.205684   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 10:38:20.205696   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 10:38:20.205741   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem (1338 bytes)
	W0610 10:38:20.205773   21811 certs.go:480] ignoring /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758_empty.pem, impossibly tiny 0 bytes
	I0610 10:38:20.205782   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 10:38:20.205802   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem (1082 bytes)
	I0610 10:38:20.205824   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem (1123 bytes)
	I0610 10:38:20.205849   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem (1675 bytes)
	I0610 10:38:20.205882   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem (1708 bytes)
	I0610 10:38:20.205910   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:38:20.205929   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem -> /usr/share/ca-certificates/10758.pem
	I0610 10:38:20.205942   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /usr/share/ca-certificates/107582.pem
	I0610 10:38:20.206417   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 10:38:20.234398   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 10:38:20.259501   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 10:38:20.284642   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 10:38:20.309551   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0610 10:38:20.333927   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 10:38:20.358570   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 10:38:20.382499   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 10:38:20.405619   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 10:38:20.427023   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem --> /usr/share/ca-certificates/10758.pem (1338 bytes)
	I0610 10:38:20.448478   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /usr/share/ca-certificates/107582.pem (1708 bytes)
	I0610 10:38:20.469859   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 10:38:20.485059   21811 ssh_runner.go:195] Run: openssl version
	I0610 10:38:20.490511   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10758.pem && ln -fs /usr/share/ca-certificates/10758.pem /etc/ssl/certs/10758.pem"
	I0610 10:38:20.500073   21811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10758.pem
	I0610 10:38:20.503921   21811 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:34 /usr/share/ca-certificates/10758.pem
	I0610 10:38:20.503963   21811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10758.pem
	I0610 10:38:20.509351   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10758.pem /etc/ssl/certs/51391683.0"
	I0610 10:38:20.518750   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107582.pem && ln -fs /usr/share/ca-certificates/107582.pem /etc/ssl/certs/107582.pem"
	I0610 10:38:20.529846   21811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107582.pem
	I0610 10:38:20.533980   21811 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:34 /usr/share/ca-certificates/107582.pem
	I0610 10:38:20.534043   21811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107582.pem
	I0610 10:38:20.539285   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107582.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 10:38:20.552015   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 10:38:20.563200   21811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:38:20.569833   21811 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:38:20.569905   21811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:38:20.580147   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
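
Each CA installed under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem above), which is how the system trust store looks certificates up. A sketch of that hashing-and-linking step (hypothetical; it shells out to the same openssl invocation shown in the log, and both paths are placeholders):

    // hash_link.go — sketch of the subject-hash symlink step above
    // (hypothetical; uses the same openssl command as the log; paths are placeholders).
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            fmt.Println("openssl failed:", err)
            return
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
            fmt.Println("symlink failed:", err)
            return
        }
        fmt.Println(link, "->", pemPath)
    }
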
	I0610 10:38:20.595653   21811 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 10:38:20.600484   21811 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 10:38:20.600546   21811 kubeadm.go:391] StartCluster: {Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:38:20.600638   21811 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0610 10:38:20.600697   21811 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 10:38:20.644847   21811 cri.go:89] found id: ""
	I0610 10:38:20.644930   21811 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 10:38:20.656257   21811 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 10:38:20.666711   21811 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 10:38:20.676925   21811 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 10:38:20.676953   21811 kubeadm.go:156] found existing configuration files:
	
	I0610 10:38:20.677004   21811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 10:38:20.686681   21811 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 10:38:20.686733   21811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 10:38:20.696625   21811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 10:38:20.706415   21811 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 10:38:20.706466   21811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 10:38:20.716555   21811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 10:38:20.726695   21811 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 10:38:20.726754   21811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 10:38:20.736817   21811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 10:38:20.746438   21811 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 10:38:20.746495   21811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
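
The kubeadm.go:154/162 lines above are minikube's stale-config cleanup: each of the four kubeconfig files under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the endpoint is absent (here the files simply do not exist yet, so every grep exits with status 2). A minimal sketch of that grep-then-rm pattern, assuming plain exec of the same commands rather than minikube's actual helpers:

package sketch

import (
	"fmt"
	"os/exec"
)

// cleanupStaleKubeconfigs mirrors the pattern in the log: any config
// that does not mention the expected endpoint is deleted so kubeadm
// can regenerate it. Paths and endpoint are taken from the log above.
func cleanupStaleKubeconfigs() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, conf)
			_ = exec.Command("sudo", "rm", "-f", conf).Run()
		}
	}
}
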
	I0610 10:38:20.756594   21811 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 10:38:20.856527   21811 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0610 10:38:20.856579   21811 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 10:38:20.979552   21811 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 10:38:20.979706   21811 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 10:38:20.979841   21811 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 10:38:21.169803   21811 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 10:38:21.172856   21811 out.go:204]   - Generating certificates and keys ...
	I0610 10:38:21.172975   21811 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 10:38:21.173075   21811 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 10:38:21.563053   21811 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 10:38:21.645799   21811 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0610 10:38:21.851856   21811 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0610 10:38:22.064223   21811 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0610 10:38:22.132741   21811 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0610 10:38:22.133044   21811 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-565925 localhost] and IPs [192.168.39.208 127.0.0.1 ::1]
	I0610 10:38:22.187292   21811 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0610 10:38:22.187483   21811 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-565925 localhost] and IPs [192.168.39.208 127.0.0.1 ::1]
	I0610 10:38:22.422331   21811 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 10:38:22.564015   21811 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 10:38:22.722893   21811 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0610 10:38:22.722990   21811 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 10:38:22.790310   21811 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 10:38:22.917415   21811 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 10:38:22.965414   21811 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 10:38:23.140970   21811 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 10:38:23.265276   21811 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 10:38:23.265901   21811 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 10:38:23.268756   21811 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 10:38:23.270635   21811 out.go:204]   - Booting up control plane ...
	I0610 10:38:23.270769   21811 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 10:38:23.270879   21811 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 10:38:23.270988   21811 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 10:38:23.289805   21811 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 10:38:23.289926   21811 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 10:38:23.290000   21811 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 10:38:23.421127   21811 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0610 10:38:23.421256   21811 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 10:38:24.422472   21811 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002005249s
	I0610 10:38:24.422564   21811 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0610 10:38:30.060319   21811 kubeadm.go:309] [api-check] The API server is healthy after 5.640390704s
	I0610 10:38:30.081713   21811 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 10:38:30.102352   21811 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 10:38:30.137788   21811 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 10:38:30.137966   21811 kubeadm.go:309] [mark-control-plane] Marking the node ha-565925 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 10:38:30.151475   21811 kubeadm.go:309] [bootstrap-token] Using token: e9zf9o.slxtdaq0q60d023m
	I0610 10:38:30.153090   21811 out.go:204]   - Configuring RBAC rules ...
	I0610 10:38:30.153209   21811 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 10:38:30.159480   21811 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 10:38:30.170946   21811 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 10:38:30.174436   21811 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 10:38:30.178756   21811 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 10:38:30.182584   21811 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 10:38:30.476495   21811 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 10:38:30.911755   21811 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 10:38:31.477200   21811 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 10:38:31.477222   21811 kubeadm.go:309] 
	I0610 10:38:31.477294   21811 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 10:38:31.477309   21811 kubeadm.go:309] 
	I0610 10:38:31.477393   21811 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 10:38:31.477401   21811 kubeadm.go:309] 
	I0610 10:38:31.477440   21811 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 10:38:31.477513   21811 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 10:38:31.477590   21811 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 10:38:31.477601   21811 kubeadm.go:309] 
	I0610 10:38:31.477680   21811 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 10:38:31.477692   21811 kubeadm.go:309] 
	I0610 10:38:31.477762   21811 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 10:38:31.477775   21811 kubeadm.go:309] 
	I0610 10:38:31.477848   21811 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 10:38:31.477945   21811 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 10:38:31.478038   21811 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 10:38:31.478047   21811 kubeadm.go:309] 
	I0610 10:38:31.478145   21811 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 10:38:31.478253   21811 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 10:38:31.478266   21811 kubeadm.go:309] 
	I0610 10:38:31.478366   21811 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token e9zf9o.slxtdaq0q60d023m \
	I0610 10:38:31.478487   21811 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e \
	I0610 10:38:31.478517   21811 kubeadm.go:309] 	--control-plane 
	I0610 10:38:31.478522   21811 kubeadm.go:309] 
	I0610 10:38:31.478593   21811 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 10:38:31.478599   21811 kubeadm.go:309] 
	I0610 10:38:31.478681   21811 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token e9zf9o.slxtdaq0q60d023m \
	I0610 10:38:31.478790   21811 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e 
	I0610 10:38:31.479029   21811 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 10:38:31.479047   21811 cni.go:84] Creating CNI manager for ""
	I0610 10:38:31.479055   21811 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0610 10:38:31.480724   21811 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0610 10:38:31.482150   21811 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0610 10:38:31.487108   21811 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0610 10:38:31.487122   21811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0610 10:38:31.506446   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
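
With the kindnet-style CNI manifest applied above, one way to confirm the network plugin actually rolls out is to watch the daemonset the manifest creates. This is a hypothetical post-apply check, not something the log itself runs; the daemonset name "kindnet" is the one minikube's bundled manifest normally uses and may differ in other setups, while the kubectl and kubeconfig paths are taken from the log:

package sketch

import "os/exec"

// cniRolledOut waits for the CNI daemonset to report all pods ready.
func cniRolledOut() ([]byte, error) {
	return exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.1/kubectl",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-n", "kube-system", "rollout", "status", "daemonset/kindnet",
		"--timeout=120s").CombinedOutput()
}
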
	I0610 10:38:31.867007   21811 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 10:38:31.867131   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:31.867189   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565925 minikube.k8s.io/updated_at=2024_06_10T10_38_31_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=ha-565925 minikube.k8s.io/primary=true
	I0610 10:38:32.044591   21811 ops.go:34] apiserver oom_adj: -16
	I0610 10:38:32.044759   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:32.545053   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:33.045090   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:33.545542   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:34.045821   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:34.545080   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:35.045408   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:35.545827   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:36.045121   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:36.545649   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:37.045675   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:37.545113   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:38.045504   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:38.544868   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:39.044773   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:39.545795   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:40.044900   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:40.545229   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:41.045782   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:41.544927   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:42.045663   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:42.545200   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:43.044832   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:43.544974   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:43.627996   21811 kubeadm.go:1107] duration metric: took 11.760906967s to wait for elevateKubeSystemPrivileges
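
The run of "kubectl get sa default" calls above, roughly every 500ms from 10:38:32 to 10:38:43, amounts to a poll loop: the elevateKubeSystemPrivileges step only completes once the default ServiceAccount exists, which took 11.76s here. A minimal sketch of that wait, assuming plain exec of the same command rather than minikube's internal retry helpers:

package sketch

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls "kubectl get sa default" until it succeeds or
// the timeout expires, matching the ~500ms cadence visible in the log.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig).Run() == nil {
			return nil // the default ServiceAccount exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default ServiceAccount not created within %s", timeout)
}
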
	W0610 10:38:43.628041   21811 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 10:38:43.628052   21811 kubeadm.go:393] duration metric: took 23.027508956s to StartCluster
	I0610 10:38:43.628074   21811 settings.go:142] acquiring lock: {Name:mk00410f6b6051b7558c7a348cc8c9f1c35c7547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:38:43.628168   21811 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 10:38:43.628798   21811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/kubeconfig: {Name:mk6bc087e599296d9e4a696a021944fac20ee98b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:38:43.629098   21811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 10:38:43.629108   21811 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 10:38:43.629163   21811 addons.go:69] Setting storage-provisioner=true in profile "ha-565925"
	I0610 10:38:43.629090   21811 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 10:38:43.629200   21811 addons.go:234] Setting addon storage-provisioner=true in "ha-565925"
	I0610 10:38:43.629210   21811 addons.go:69] Setting default-storageclass=true in profile "ha-565925"
	I0610 10:38:43.629237   21811 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:38:43.629242   21811 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-565925"
	I0610 10:38:43.629201   21811 start.go:240] waiting for startup goroutines ...
	I0610 10:38:43.629326   21811 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:38:43.629630   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:38:43.629661   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:38:43.629701   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:38:43.629749   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:38:43.644451   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45991
	I0610 10:38:43.644595   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37057
	I0610 10:38:43.644888   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:38:43.644910   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:38:43.645369   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:38:43.645395   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:38:43.645591   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:38:43.645613   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:38:43.645679   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:38:43.645950   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:38:43.646162   21811 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:38:43.646276   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:38:43.646304   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:38:43.648486   21811 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 10:38:43.648689   21811 kapi.go:59] client config for ha-565925: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.crt", KeyFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.key", CAFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfaf80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 10:38:43.649117   21811 cert_rotation.go:137] Starting client certificate rotation controller
	I0610 10:38:43.649284   21811 addons.go:234] Setting addon default-storageclass=true in "ha-565925"
	I0610 10:38:43.649313   21811 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:38:43.649542   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:38:43.649566   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:38:43.661386   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32803
	I0610 10:38:43.661777   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:38:43.662315   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:38:43.662335   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:38:43.662683   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:38:43.662844   21811 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:38:43.663363   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41383
	I0610 10:38:43.663824   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:38:43.664475   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:38:43.664490   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:38:43.664584   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:38:43.666822   21811 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 10:38:43.665077   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:38:43.668074   21811 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 10:38:43.668094   21811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 10:38:43.668111   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:38:43.668663   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:38:43.668711   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:38:43.671104   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:43.671506   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:43.671529   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:43.671863   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:38:43.672028   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:43.672157   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:38:43.672312   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:38:43.684209   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39985
	I0610 10:38:43.684652   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:38:43.685166   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:38:43.685204   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:38:43.685521   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:38:43.685714   21811 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:38:43.687447   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:38:43.687710   21811 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 10:38:43.687724   21811 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 10:38:43.687738   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:38:43.690055   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:43.690391   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:43.690422   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:43.690582   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:38:43.690753   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:43.690865   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:38:43.690984   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:38:43.747501   21811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
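
The sed pipeline above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.39.1) and a log directive is added. Reconstructed from the command itself, the injected part of the Corefile looks roughly like this (the surrounding server block is the usual kubeadm default; omitted directives are marked with "..."):

.:53 {
        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf ...
        ...
}
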
	I0610 10:38:43.815096   21811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 10:38:43.829882   21811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 10:38:44.225366   21811 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0610 10:38:44.225458   21811 main.go:141] libmachine: Making call to close driver server
	I0610 10:38:44.225483   21811 main.go:141] libmachine: (ha-565925) Calling .Close
	I0610 10:38:44.225775   21811 main.go:141] libmachine: (ha-565925) DBG | Closing plugin on server side
	I0610 10:38:44.225801   21811 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:38:44.225829   21811 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:38:44.225851   21811 main.go:141] libmachine: Making call to close driver server
	I0610 10:38:44.225860   21811 main.go:141] libmachine: (ha-565925) Calling .Close
	I0610 10:38:44.226116   21811 main.go:141] libmachine: (ha-565925) DBG | Closing plugin on server side
	I0610 10:38:44.226172   21811 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:38:44.226186   21811 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:38:44.226297   21811 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0610 10:38:44.226311   21811 round_trippers.go:469] Request Headers:
	I0610 10:38:44.226323   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:38:44.226332   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:38:44.240471   21811 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0610 10:38:44.241103   21811 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0610 10:38:44.241118   21811 round_trippers.go:469] Request Headers:
	I0610 10:38:44.241126   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:38:44.241131   21811 round_trippers.go:473]     Content-Type: application/json
	I0610 10:38:44.241134   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:38:44.243493   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:38:44.243669   21811 main.go:141] libmachine: Making call to close driver server
	I0610 10:38:44.243685   21811 main.go:141] libmachine: (ha-565925) Calling .Close
	I0610 10:38:44.243948   21811 main.go:141] libmachine: (ha-565925) DBG | Closing plugin on server side
	I0610 10:38:44.243976   21811 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:38:44.243985   21811 main.go:141] libmachine: Making call to close connection to plugin binary
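
The GET on /storageclasses followed by the PUT on /storageclasses/standard above is the default-storageclass addon updating the "standard" class, presumably to flag it as the cluster default. A hypothetical hand-rolled equivalent; the annotation key is the standard Kubernetes one, and this is an illustration rather than the addon's actual code path:

package sketch

import "os/exec"

// markStandardDefault sets the conventional default-class annotation on
// the "standard" StorageClass, the usual way a class becomes the default.
func markStandardDefault() error {
	return exec.Command("kubectl", "patch", "storageclass", "standard", "-p",
		`{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}`).Run()
}
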
	I0610 10:38:44.446679   21811 main.go:141] libmachine: Making call to close driver server
	I0610 10:38:44.446716   21811 main.go:141] libmachine: (ha-565925) Calling .Close
	I0610 10:38:44.447048   21811 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:38:44.447075   21811 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:38:44.447086   21811 main.go:141] libmachine: Making call to close driver server
	I0610 10:38:44.447101   21811 main.go:141] libmachine: (ha-565925) Calling .Close
	I0610 10:38:44.447356   21811 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:38:44.447384   21811 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:38:44.447368   21811 main.go:141] libmachine: (ha-565925) DBG | Closing plugin on server side
	I0610 10:38:44.449662   21811 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0610 10:38:44.450910   21811 addons.go:510] duration metric: took 821.796595ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0610 10:38:44.450948   21811 start.go:245] waiting for cluster config update ...
	I0610 10:38:44.450963   21811 start.go:254] writing updated cluster config ...
	I0610 10:38:44.452537   21811 out.go:177] 
	I0610 10:38:44.454465   21811 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:38:44.454535   21811 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json ...
	I0610 10:38:44.456198   21811 out.go:177] * Starting "ha-565925-m02" control-plane node in "ha-565925" cluster
	I0610 10:38:44.457305   21811 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 10:38:44.457329   21811 cache.go:56] Caching tarball of preloaded images
	I0610 10:38:44.457415   21811 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 10:38:44.457428   21811 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0610 10:38:44.457500   21811 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json ...
	I0610 10:38:44.457661   21811 start.go:360] acquireMachinesLock for ha-565925-m02: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:38:44.457702   21811 start.go:364] duration metric: took 22.998µs to acquireMachinesLock for "ha-565925-m02"
	I0610 10:38:44.457719   21811 start.go:93] Provisioning new machine with config: &{Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 10:38:44.457782   21811 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0610 10:38:44.459263   21811 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 10:38:44.459339   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:38:44.459362   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:38:44.473672   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38629
	I0610 10:38:44.474063   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:38:44.474521   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:38:44.474540   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:38:44.474850   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:38:44.475045   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetMachineName
	I0610 10:38:44.475214   21811 main.go:141] libmachine: (ha-565925-m02) Calling .DriverName
	I0610 10:38:44.475368   21811 start.go:159] libmachine.API.Create for "ha-565925" (driver="kvm2")
	I0610 10:38:44.475390   21811 client.go:168] LocalClient.Create starting
	I0610 10:38:44.475421   21811 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem
	I0610 10:38:44.475457   21811 main.go:141] libmachine: Decoding PEM data...
	I0610 10:38:44.475472   21811 main.go:141] libmachine: Parsing certificate...
	I0610 10:38:44.475539   21811 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem
	I0610 10:38:44.475563   21811 main.go:141] libmachine: Decoding PEM data...
	I0610 10:38:44.475575   21811 main.go:141] libmachine: Parsing certificate...
	I0610 10:38:44.475605   21811 main.go:141] libmachine: Running pre-create checks...
	I0610 10:38:44.475617   21811 main.go:141] libmachine: (ha-565925-m02) Calling .PreCreateCheck
	I0610 10:38:44.475759   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetConfigRaw
	I0610 10:38:44.476100   21811 main.go:141] libmachine: Creating machine...
	I0610 10:38:44.476113   21811 main.go:141] libmachine: (ha-565925-m02) Calling .Create
	I0610 10:38:44.476220   21811 main.go:141] libmachine: (ha-565925-m02) Creating KVM machine...
	I0610 10:38:44.477399   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found existing default KVM network
	I0610 10:38:44.477598   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found existing private KVM network mk-ha-565925
	I0610 10:38:44.477769   21811 main.go:141] libmachine: (ha-565925-m02) Setting up store path in /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02 ...
	I0610 10:38:44.477792   21811 main.go:141] libmachine: (ha-565925-m02) Building disk image from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0610 10:38:44.477817   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:44.477729   22211 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:38:44.477904   21811 main.go:141] libmachine: (ha-565925-m02) Downloading /home/jenkins/minikube-integration/19046-3880/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 10:38:44.706036   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:44.705903   22211 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/id_rsa...
	I0610 10:38:45.145834   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:45.145701   22211 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/ha-565925-m02.rawdisk...
	I0610 10:38:45.145871   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Writing magic tar header
	I0610 10:38:45.145888   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Writing SSH key tar header
	I0610 10:38:45.145910   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:45.145836   22211 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02 ...
	I0610 10:38:45.145995   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02
	I0610 10:38:45.146025   21811 main.go:141] libmachine: (ha-565925-m02) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02 (perms=drwx------)
	I0610 10:38:45.146038   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines
	I0610 10:38:45.146050   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:38:45.146057   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880
	I0610 10:38:45.146066   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0610 10:38:45.146075   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Checking permissions on dir: /home/jenkins
	I0610 10:38:45.146085   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Checking permissions on dir: /home
	I0610 10:38:45.146096   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Skipping /home - not owner
	I0610 10:38:45.146108   21811 main.go:141] libmachine: (ha-565925-m02) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines (perms=drwxr-xr-x)
	I0610 10:38:45.146123   21811 main.go:141] libmachine: (ha-565925-m02) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube (perms=drwxr-xr-x)
	I0610 10:38:45.146130   21811 main.go:141] libmachine: (ha-565925-m02) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880 (perms=drwxrwxr-x)
	I0610 10:38:45.146141   21811 main.go:141] libmachine: (ha-565925-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0610 10:38:45.146146   21811 main.go:141] libmachine: (ha-565925-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0610 10:38:45.146154   21811 main.go:141] libmachine: (ha-565925-m02) Creating domain...
	I0610 10:38:45.147168   21811 main.go:141] libmachine: (ha-565925-m02) define libvirt domain using xml: 
	I0610 10:38:45.147192   21811 main.go:141] libmachine: (ha-565925-m02) <domain type='kvm'>
	I0610 10:38:45.147202   21811 main.go:141] libmachine: (ha-565925-m02)   <name>ha-565925-m02</name>
	I0610 10:38:45.147210   21811 main.go:141] libmachine: (ha-565925-m02)   <memory unit='MiB'>2200</memory>
	I0610 10:38:45.147219   21811 main.go:141] libmachine: (ha-565925-m02)   <vcpu>2</vcpu>
	I0610 10:38:45.147227   21811 main.go:141] libmachine: (ha-565925-m02)   <features>
	I0610 10:38:45.147235   21811 main.go:141] libmachine: (ha-565925-m02)     <acpi/>
	I0610 10:38:45.147246   21811 main.go:141] libmachine: (ha-565925-m02)     <apic/>
	I0610 10:38:45.147254   21811 main.go:141] libmachine: (ha-565925-m02)     <pae/>
	I0610 10:38:45.147266   21811 main.go:141] libmachine: (ha-565925-m02)     
	I0610 10:38:45.147272   21811 main.go:141] libmachine: (ha-565925-m02)   </features>
	I0610 10:38:45.147280   21811 main.go:141] libmachine: (ha-565925-m02)   <cpu mode='host-passthrough'>
	I0610 10:38:45.147284   21811 main.go:141] libmachine: (ha-565925-m02)   
	I0610 10:38:45.147291   21811 main.go:141] libmachine: (ha-565925-m02)   </cpu>
	I0610 10:38:45.147296   21811 main.go:141] libmachine: (ha-565925-m02)   <os>
	I0610 10:38:45.147302   21811 main.go:141] libmachine: (ha-565925-m02)     <type>hvm</type>
	I0610 10:38:45.147307   21811 main.go:141] libmachine: (ha-565925-m02)     <boot dev='cdrom'/>
	I0610 10:38:45.147313   21811 main.go:141] libmachine: (ha-565925-m02)     <boot dev='hd'/>
	I0610 10:38:45.147318   21811 main.go:141] libmachine: (ha-565925-m02)     <bootmenu enable='no'/>
	I0610 10:38:45.147325   21811 main.go:141] libmachine: (ha-565925-m02)   </os>
	I0610 10:38:45.147330   21811 main.go:141] libmachine: (ha-565925-m02)   <devices>
	I0610 10:38:45.147338   21811 main.go:141] libmachine: (ha-565925-m02)     <disk type='file' device='cdrom'>
	I0610 10:38:45.147347   21811 main.go:141] libmachine: (ha-565925-m02)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/boot2docker.iso'/>
	I0610 10:38:45.147362   21811 main.go:141] libmachine: (ha-565925-m02)       <target dev='hdc' bus='scsi'/>
	I0610 10:38:45.147370   21811 main.go:141] libmachine: (ha-565925-m02)       <readonly/>
	I0610 10:38:45.147377   21811 main.go:141] libmachine: (ha-565925-m02)     </disk>
	I0610 10:38:45.147391   21811 main.go:141] libmachine: (ha-565925-m02)     <disk type='file' device='disk'>
	I0610 10:38:45.147402   21811 main.go:141] libmachine: (ha-565925-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0610 10:38:45.147410   21811 main.go:141] libmachine: (ha-565925-m02)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/ha-565925-m02.rawdisk'/>
	I0610 10:38:45.147417   21811 main.go:141] libmachine: (ha-565925-m02)       <target dev='hda' bus='virtio'/>
	I0610 10:38:45.147422   21811 main.go:141] libmachine: (ha-565925-m02)     </disk>
	I0610 10:38:45.147435   21811 main.go:141] libmachine: (ha-565925-m02)     <interface type='network'>
	I0610 10:38:45.147448   21811 main.go:141] libmachine: (ha-565925-m02)       <source network='mk-ha-565925'/>
	I0610 10:38:45.147458   21811 main.go:141] libmachine: (ha-565925-m02)       <model type='virtio'/>
	I0610 10:38:45.147471   21811 main.go:141] libmachine: (ha-565925-m02)     </interface>
	I0610 10:38:45.147484   21811 main.go:141] libmachine: (ha-565925-m02)     <interface type='network'>
	I0610 10:38:45.147493   21811 main.go:141] libmachine: (ha-565925-m02)       <source network='default'/>
	I0610 10:38:45.147498   21811 main.go:141] libmachine: (ha-565925-m02)       <model type='virtio'/>
	I0610 10:38:45.147505   21811 main.go:141] libmachine: (ha-565925-m02)     </interface>
	I0610 10:38:45.147510   21811 main.go:141] libmachine: (ha-565925-m02)     <serial type='pty'>
	I0610 10:38:45.147518   21811 main.go:141] libmachine: (ha-565925-m02)       <target port='0'/>
	I0610 10:38:45.147528   21811 main.go:141] libmachine: (ha-565925-m02)     </serial>
	I0610 10:38:45.147540   21811 main.go:141] libmachine: (ha-565925-m02)     <console type='pty'>
	I0610 10:38:45.147554   21811 main.go:141] libmachine: (ha-565925-m02)       <target type='serial' port='0'/>
	I0610 10:38:45.147564   21811 main.go:141] libmachine: (ha-565925-m02)     </console>
	I0610 10:38:45.147573   21811 main.go:141] libmachine: (ha-565925-m02)     <rng model='virtio'>
	I0610 10:38:45.147585   21811 main.go:141] libmachine: (ha-565925-m02)       <backend model='random'>/dev/random</backend>
	I0610 10:38:45.147593   21811 main.go:141] libmachine: (ha-565925-m02)     </rng>
	I0610 10:38:45.147604   21811 main.go:141] libmachine: (ha-565925-m02)     
	I0610 10:38:45.147614   21811 main.go:141] libmachine: (ha-565925-m02)     
	I0610 10:38:45.147640   21811 main.go:141] libmachine: (ha-565925-m02)   </devices>
	I0610 10:38:45.147662   21811 main.go:141] libmachine: (ha-565925-m02) </domain>
	I0610 10:38:45.147676   21811 main.go:141] libmachine: (ha-565925-m02) 
	I0610 10:38:45.154092   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:5e:8a:ca in network default
	I0610 10:38:45.154668   21811 main.go:141] libmachine: (ha-565925-m02) Ensuring networks are active...
	I0610 10:38:45.154693   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:45.155410   21811 main.go:141] libmachine: (ha-565925-m02) Ensuring network default is active
	I0610 10:38:45.155685   21811 main.go:141] libmachine: (ha-565925-m02) Ensuring network mk-ha-565925 is active
	I0610 10:38:45.156099   21811 main.go:141] libmachine: (ha-565925-m02) Getting domain xml...
	I0610 10:38:45.156771   21811 main.go:141] libmachine: (ha-565925-m02) Creating domain...
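
The XML printed above is what gets handed to libvirt at the "define libvirt domain using xml" and "Creating domain..." steps. Outside the kvm2 driver (which talks to libvirt directly), the same two steps could be reproduced by hand with virsh; a hedged sketch shelling out to it, with a hypothetical XML file path:

package sketch

import "os/exec"

// defineAndStart mirrors the define-then-create sequence in the log by
// shelling out to virsh against the same qemu:///system URI.
func defineAndStart(xmlPath, domain string) error {
	if err := exec.Command("virsh", "--connect", "qemu:///system",
		"define", xmlPath).Run(); err != nil {
		return err
	}
	return exec.Command("virsh", "--connect", "qemu:///system",
		"start", domain).Run()
}
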
	I0610 10:38:46.358608   21811 main.go:141] libmachine: (ha-565925-m02) Waiting to get IP...
	I0610 10:38:46.359386   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:46.359869   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:38:46.359898   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:46.359834   22211 retry.go:31] will retry after 263.074572ms: waiting for machine to come up
	I0610 10:38:46.624279   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:46.624842   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:38:46.624872   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:46.624799   22211 retry.go:31] will retry after 257.651083ms: waiting for machine to come up
	I0610 10:38:46.884256   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:46.884717   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:38:46.884745   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:46.884673   22211 retry.go:31] will retry after 394.193995ms: waiting for machine to come up
	I0610 10:38:47.280088   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:47.280587   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:38:47.280617   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:47.280522   22211 retry.go:31] will retry after 458.928377ms: waiting for machine to come up
	I0610 10:38:47.741103   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:47.741634   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:38:47.741663   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:47.741596   22211 retry.go:31] will retry after 464.110472ms: waiting for machine to come up
	I0610 10:38:48.207484   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:48.208444   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:38:48.208476   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:48.208399   22211 retry.go:31] will retry after 679.15084ms: waiting for machine to come up
	I0610 10:38:48.888988   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:48.889404   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:38:48.889427   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:48.889356   22211 retry.go:31] will retry after 817.452236ms: waiting for machine to come up
	I0610 10:38:49.708579   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:49.709093   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:38:49.709123   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:49.709033   22211 retry.go:31] will retry after 1.243856521s: waiting for machine to come up
	I0610 10:38:50.954152   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:50.954633   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:38:50.954660   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:50.954587   22211 retry.go:31] will retry after 1.365236787s: waiting for machine to come up
	I0610 10:38:52.322096   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:52.322506   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:38:52.322520   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:52.322475   22211 retry.go:31] will retry after 1.597490731s: waiting for machine to come up
	I0610 10:38:53.922196   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:53.922598   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:38:53.922624   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:53.922547   22211 retry.go:31] will retry after 2.80774575s: waiting for machine to come up
	I0610 10:38:56.732630   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:56.733049   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:38:56.733071   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:56.732999   22211 retry.go:31] will retry after 2.939623483s: waiting for machine to come up
	I0610 10:38:59.674486   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:59.674976   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:38:59.675008   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:59.674921   22211 retry.go:31] will retry after 2.809876254s: waiting for machine to come up
	I0610 10:39:02.487793   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:02.488160   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:39:02.488183   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:39:02.488134   22211 retry.go:31] will retry after 4.506866771s: waiting for machine to come up
	I0610 10:39:06.997754   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:06.998215   21811 main.go:141] libmachine: (ha-565925-m02) Found IP for machine: 192.168.39.230
	I0610 10:39:06.998231   21811 main.go:141] libmachine: (ha-565925-m02) Reserving static IP address...
	I0610 10:39:06.998242   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has current primary IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:06.998686   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find host DHCP lease matching {name: "ha-565925-m02", mac: "52:54:00:c0:fd:0f", ip: "192.168.39.230"} in network mk-ha-565925
	I0610 10:39:07.071381   21811 main.go:141] libmachine: (ha-565925-m02) Reserved static IP address: 192.168.39.230
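The retry loop above is the KVM driver waiting for the freshly created domain's MAC to pick up a DHCP lease on mk-ha-565925, with a backoff that grows from roughly 0.5s to 4.5s. A rough manual equivalent of that wait, using the network and MAC from the log rather than the driver's libvirt API calls, would be:

    # Poll libvirt until the domain's MAC shows up in the network's DHCP leases.
    MAC=52:54:00:c0:fd:0f
    NET=mk-ha-565925
    until virsh net-dhcp-leases "$NET" | grep -qi "$MAC"; do
      echo "waiting for machine to come up"
      sleep 2
    done
    virsh net-dhcp-leases "$NET" | grep -i "$MAC"   # prints the leased IP (192.168.39.230 here)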
	I0610 10:39:07.071407   21811 main.go:141] libmachine: (ha-565925-m02) Waiting for SSH to be available...
	I0610 10:39:07.071417   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Getting to WaitForSSH function...
	I0610 10:39:07.074169   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.074624   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:07.074652   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.074751   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Using SSH client type: external
	I0610 10:39:07.074774   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/id_rsa (-rw-------)
	I0610 10:39:07.074811   21811 main.go:141] libmachine: (ha-565925-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0610 10:39:07.074824   21811 main.go:141] libmachine: (ha-565925-m02) DBG | About to run SSH command:
	I0610 10:39:07.074886   21811 main.go:141] libmachine: (ha-565925-m02) DBG | exit 0
	I0610 10:39:07.200853   21811 main.go:141] libmachine: (ha-565925-m02) DBG | SSH cmd err, output: <nil>: 
	I0610 10:39:07.201150   21811 main.go:141] libmachine: (ha-565925-m02) KVM machine creation complete!
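The WaitForSSH probe above is simply "exit 0" executed over SSH with the client options the log prints; reproduced as a standalone command (key path and IP taken from the log):

    # Succeeds (and prints nothing) once sshd in the guest accepts the machine key.
    ssh -F /dev/null \
        -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o PasswordAuthentication=no -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/id_rsa \
        -p 22 docker@192.168.39.230 'exit 0' && echo 'SSH is available'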
	I0610 10:39:07.201495   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetConfigRaw
	I0610 10:39:07.202104   21811 main.go:141] libmachine: (ha-565925-m02) Calling .DriverName
	I0610 10:39:07.202334   21811 main.go:141] libmachine: (ha-565925-m02) Calling .DriverName
	I0610 10:39:07.202505   21811 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0610 10:39:07.202521   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetState
	I0610 10:39:07.203730   21811 main.go:141] libmachine: Detecting operating system of created instance...
	I0610 10:39:07.203745   21811 main.go:141] libmachine: Waiting for SSH to be available...
	I0610 10:39:07.203753   21811 main.go:141] libmachine: Getting to WaitForSSH function...
	I0610 10:39:07.203761   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:39:07.206128   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.206463   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:07.206488   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.206630   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:39:07.206799   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:07.206967   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:07.207154   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:39:07.207301   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:39:07.207520   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0610 10:39:07.207533   21811 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0610 10:39:07.320080   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 10:39:07.320100   21811 main.go:141] libmachine: Detecting the provisioner...
	I0610 10:39:07.320109   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:39:07.322974   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.323356   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:07.323388   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.323479   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:39:07.323658   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:07.323847   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:07.323992   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:39:07.324264   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:39:07.324429   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0610 10:39:07.324440   21811 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0610 10:39:07.433331   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0610 10:39:07.433413   21811 main.go:141] libmachine: found compatible host: buildroot
	I0610 10:39:07.433429   21811 main.go:141] libmachine: Provisioning with buildroot...
	I0610 10:39:07.433441   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetMachineName
	I0610 10:39:07.433729   21811 buildroot.go:166] provisioning hostname "ha-565925-m02"
	I0610 10:39:07.433758   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetMachineName
	I0610 10:39:07.433956   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:39:07.436807   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.437300   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:07.437330   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.437511   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:39:07.437696   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:07.437874   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:07.438015   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:39:07.438219   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:39:07.438436   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0610 10:39:07.438458   21811 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565925-m02 && echo "ha-565925-m02" | sudo tee /etc/hostname
	I0610 10:39:07.562817   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565925-m02
	
	I0610 10:39:07.562849   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:39:07.565629   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.565944   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:07.565971   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.566151   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:39:07.566335   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:07.566483   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:07.566610   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:39:07.566793   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:39:07.566942   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0610 10:39:07.566962   21811 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565925-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565925-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565925-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 10:39:07.680972   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
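Stitched together, the two provisioning commands above (set the hostname, then keep /etc/hosts consistent with it) amount to:

    # Set the transient and persistent hostname, then make sure /etc/hosts resolves it.
    sudo hostname ha-565925-m02 && echo "ha-565925-m02" | sudo tee /etc/hostname
    if ! grep -xq '.*\sha-565925-m02' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565925-m02/g' /etc/hosts
      else
        echo '127.0.1.1 ha-565925-m02' | sudo tee -a /etc/hosts
      fi
    fi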
	I0610 10:39:07.681003   21811 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19046-3880/.minikube CaCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19046-3880/.minikube}
	I0610 10:39:07.681026   21811 buildroot.go:174] setting up certificates
	I0610 10:39:07.681037   21811 provision.go:84] configureAuth start
	I0610 10:39:07.681049   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetMachineName
	I0610 10:39:07.681343   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetIP
	I0610 10:39:07.684015   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.684354   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:07.684385   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.684538   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:39:07.686863   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.687282   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:07.687312   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.687479   21811 provision.go:143] copyHostCerts
	I0610 10:39:07.687506   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 10:39:07.687535   21811 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem, removing ...
	I0610 10:39:07.687541   21811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 10:39:07.687597   21811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem (1082 bytes)
	I0610 10:39:07.687669   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 10:39:07.687686   21811 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem, removing ...
	I0610 10:39:07.687692   21811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 10:39:07.687715   21811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem (1123 bytes)
	I0610 10:39:07.687755   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 10:39:07.687771   21811 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem, removing ...
	I0610 10:39:07.687777   21811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 10:39:07.687797   21811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem (1675 bytes)
	I0610 10:39:07.687843   21811 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem org=jenkins.ha-565925-m02 san=[127.0.0.1 192.168.39.230 ha-565925-m02 localhost minikube]
	I0610 10:39:07.787236   21811 provision.go:177] copyRemoteCerts
	I0610 10:39:07.787289   21811 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 10:39:07.787309   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:39:07.790084   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.790474   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:07.790504   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.790655   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:39:07.790797   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:07.790925   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:39:07.791097   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/id_rsa Username:docker}
	I0610 10:39:07.874638   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 10:39:07.874703   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 10:39:07.896656   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 10:39:07.896718   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0610 10:39:07.919401   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 10:39:07.919464   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 10:39:07.944002   21811 provision.go:87] duration metric: took 262.952427ms to configureAuth
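The copyRemoteCerts step above pushes the shared CA plus the freshly generated server cert/key into /etc/docker on the new node. An illustrative stand-in for minikube's internal ssh_runner transfer (not its actual code path), using the key and paths shown in the log:

    KEY=/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/id_rsa
    MK=/home/jenkins/minikube-integration/19046-3880/.minikube
    HOST=docker@192.168.39.230
    ssh -i "$KEY" "$HOST" 'sudo mkdir -p /etc/docker'
    # Stream each file through sudo tee so it lands in the root-owned directory.
    ssh -i "$KEY" "$HOST" 'sudo tee /etc/docker/ca.pem >/dev/null'         < "$MK/certs/ca.pem"
    ssh -i "$KEY" "$HOST" 'sudo tee /etc/docker/server.pem >/dev/null'     < "$MK/machines/server.pem"
    ssh -i "$KEY" "$HOST" 'sudo tee /etc/docker/server-key.pem >/dev/null' < "$MK/machines/server-key.pem"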
	I0610 10:39:07.944029   21811 buildroot.go:189] setting minikube options for container-runtime
	I0610 10:39:07.944222   21811 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:39:07.944310   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:39:07.946955   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.947346   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:07.947377   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.947579   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:39:07.947732   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:07.947888   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:07.947993   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:39:07.948173   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:39:07.948331   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0610 10:39:07.948343   21811 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 10:39:08.222700   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 10:39:08.222729   21811 main.go:141] libmachine: Checking connection to Docker...
	I0610 10:39:08.222736   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetURL
	I0610 10:39:08.224193   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Using libvirt version 6000000
	I0610 10:39:08.226332   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.226683   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:08.226715   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.226820   21811 main.go:141] libmachine: Docker is up and running!
	I0610 10:39:08.226833   21811 main.go:141] libmachine: Reticulating splines...
	I0610 10:39:08.226840   21811 client.go:171] duration metric: took 23.751443228s to LocalClient.Create
	I0610 10:39:08.226861   21811 start.go:167] duration metric: took 23.751493974s to libmachine.API.Create "ha-565925"
	I0610 10:39:08.226874   21811 start.go:293] postStartSetup for "ha-565925-m02" (driver="kvm2")
	I0610 10:39:08.226889   21811 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 10:39:08.226910   21811 main.go:141] libmachine: (ha-565925-m02) Calling .DriverName
	I0610 10:39:08.227190   21811 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 10:39:08.227224   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:39:08.229415   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.229716   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:08.229739   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.229873   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:39:08.230069   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:08.230219   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:39:08.230359   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/id_rsa Username:docker}
	I0610 10:39:08.315120   21811 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 10:39:08.319099   21811 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 10:39:08.319128   21811 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/addons for local assets ...
	I0610 10:39:08.319210   21811 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/files for local assets ...
	I0610 10:39:08.319286   21811 filesync.go:149] local asset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> 107582.pem in /etc/ssl/certs
	I0610 10:39:08.319295   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /etc/ssl/certs/107582.pem
	I0610 10:39:08.319370   21811 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 10:39:08.328529   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /etc/ssl/certs/107582.pem (1708 bytes)
	I0610 10:39:08.351550   21811 start.go:296] duration metric: took 124.656239ms for postStartSetup
	I0610 10:39:08.351593   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetConfigRaw
	I0610 10:39:08.352278   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetIP
	I0610 10:39:08.354818   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.355275   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:08.355306   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.355509   21811 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json ...
	I0610 10:39:08.355685   21811 start.go:128] duration metric: took 23.897893274s to createHost
	I0610 10:39:08.355706   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:39:08.357933   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.358236   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:08.358262   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.358361   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:39:08.358556   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:08.358690   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:08.358788   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:39:08.358930   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:39:08.359120   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0610 10:39:08.359134   21811 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 10:39:08.465359   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718015948.439565666
	
	I0610 10:39:08.465390   21811 fix.go:216] guest clock: 1718015948.439565666
	I0610 10:39:08.465400   21811 fix.go:229] Guest: 2024-06-10 10:39:08.439565666 +0000 UTC Remote: 2024-06-10 10:39:08.355695611 +0000 UTC m=+77.141782194 (delta=83.870055ms)
	I0610 10:39:08.465419   21811 fix.go:200] guest clock delta is within tolerance: 83.870055ms
	I0610 10:39:08.465424   21811 start.go:83] releasing machines lock for "ha-565925-m02", held for 24.007713656s
	I0610 10:39:08.465441   21811 main.go:141] libmachine: (ha-565925-m02) Calling .DriverName
	I0610 10:39:08.465733   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetIP
	I0610 10:39:08.468437   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.468743   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:08.468769   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.471085   21811 out.go:177] * Found network options:
	I0610 10:39:08.472391   21811 out.go:177]   - NO_PROXY=192.168.39.208
	W0610 10:39:08.473475   21811 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 10:39:08.473514   21811 main.go:141] libmachine: (ha-565925-m02) Calling .DriverName
	I0610 10:39:08.474053   21811 main.go:141] libmachine: (ha-565925-m02) Calling .DriverName
	I0610 10:39:08.474246   21811 main.go:141] libmachine: (ha-565925-m02) Calling .DriverName
	I0610 10:39:08.474312   21811 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 10:39:08.474351   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	W0610 10:39:08.474427   21811 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 10:39:08.474480   21811 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 10:39:08.474495   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:39:08.477592   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.477691   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.477969   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:08.477998   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.478085   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:08.478107   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.478197   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:39:08.478323   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:39:08.478400   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:08.478466   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:08.478535   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:39:08.478600   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:39:08.478693   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/id_rsa Username:docker}
	I0610 10:39:08.478812   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/id_rsa Username:docker}
	I0610 10:39:08.722285   21811 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 10:39:08.728263   21811 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 10:39:08.728339   21811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 10:39:08.744058   21811 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 10:39:08.744081   21811 start.go:494] detecting cgroup driver to use...
	I0610 10:39:08.744146   21811 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 10:39:08.761715   21811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 10:39:08.775199   21811 docker.go:217] disabling cri-docker service (if available) ...
	I0610 10:39:08.775260   21811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 10:39:08.789061   21811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 10:39:08.802987   21811 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 10:39:08.935904   21811 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 10:39:09.078033   21811 docker.go:233] disabling docker service ...
	I0610 10:39:09.078110   21811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 10:39:09.093795   21811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 10:39:09.107299   21811 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 10:39:09.257599   21811 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 10:39:09.381188   21811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 10:39:09.395395   21811 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 10:39:09.413435   21811 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0610 10:39:09.413493   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:39:09.423621   21811 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 10:39:09.423678   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:39:09.433604   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:39:09.445821   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:39:09.456663   21811 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 10:39:09.466774   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:39:09.476562   21811 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:39:09.492573   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:39:09.502454   21811 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 10:39:09.511065   21811 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0610 10:39:09.511117   21811 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0610 10:39:09.522654   21811 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 10:39:09.532117   21811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:39:09.655738   21811 ssh_runner.go:195] Run: sudo systemctl restart crio
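Taken together, the container-runtime preparation above (including the earlier /etc/sysconfig/crio.minikube write) points crictl at CRI-O, pins the pause image, forces the cgroupfs cgroup manager, opens low ports to pods, and loads the kernel prerequisites before restarting the service. Consolidated, with the commands lifted from the log (the %!s(MISSING) placeholders in the logged commands are Go format-verb artifacts; the real content appears in the echoed output):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # Point crictl at the CRI-O socket and record minikube's --insecure-registry option.
    printf "runtime-endpoint: unix:///var/run/crio/crio.sock\n" | sudo tee /etc/crictl.yaml
    sudo mkdir -p /etc/sysconfig
    printf "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
    # Pause image, cgroup driver, and conmon cgroup.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # Let pods bind privileged ports without extra capabilities.
    sudo grep -q '^ *default_sysctls' "$CONF" || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    # Kernel prerequisites for pod networking, then restart the runtime.
    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo systemctl daemon-reload && sudo systemctl restart crio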
	I0610 10:39:09.788645   21811 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 10:39:09.788720   21811 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 10:39:09.793973   21811 start.go:562] Will wait 60s for crictl version
	I0610 10:39:09.794028   21811 ssh_runner.go:195] Run: which crictl
	I0610 10:39:09.797564   21811 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 10:39:09.834595   21811 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0610 10:39:09.834660   21811 ssh_runner.go:195] Run: crio --version
	I0610 10:39:09.864781   21811 ssh_runner.go:195] Run: crio --version
	I0610 10:39:09.893856   21811 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0610 10:39:09.895407   21811 out.go:177]   - env NO_PROXY=192.168.39.208
	I0610 10:39:09.896638   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetIP
	I0610 10:39:09.899419   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:09.899843   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:09.899869   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:09.900167   21811 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0610 10:39:09.904123   21811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
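The guard above keeps a single host.minikube.internal entry pointing at the libvirt gateway; written out on its own, the check-and-rewrite is:

    # Skip if the entry already exists, otherwise rewrite /etc/hosts with the gateway mapping.
    grep -q $'192.168.39.1\thost.minikube.internal$' /etc/hosts || {
      { grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.39.1\thost.minikube.internal\n'; } > /tmp/h.$$
      sudo cp /tmp/h.$$ /etc/hosts
    }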
	I0610 10:39:09.916287   21811 mustload.go:65] Loading cluster: ha-565925
	I0610 10:39:09.916463   21811 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:39:09.916690   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:39:09.916715   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:39:09.931688   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40919
	I0610 10:39:09.932103   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:39:09.932559   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:39:09.932580   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:39:09.932874   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:39:09.933093   21811 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:39:09.934585   21811 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:39:09.934847   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:39:09.934869   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:39:09.949008   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42831
	I0610 10:39:09.949398   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:39:09.949823   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:39:09.949841   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:39:09.950165   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:39:09.950358   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:39:09.950532   21811 certs.go:68] Setting up /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925 for IP: 192.168.39.230
	I0610 10:39:09.950542   21811 certs.go:194] generating shared ca certs ...
	I0610 10:39:09.950557   21811 certs.go:226] acquiring lock for ca certs: {Name:mke8b68fecbd8b649419d142cdde25446085a9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:39:09.950682   21811 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key
	I0610 10:39:09.950738   21811 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key
	I0610 10:39:09.950751   21811 certs.go:256] generating profile certs ...
	I0610 10:39:09.950831   21811 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.key
	I0610 10:39:09.950864   21811 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.8484982c
	I0610 10:39:09.950883   21811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.8484982c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.230 192.168.39.254]
	I0610 10:39:10.074645   21811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.8484982c ...
	I0610 10:39:10.074672   21811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.8484982c: {Name:mk6b6dcda4e45bea2edd4c7720b62d681e4e7bdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:39:10.074858   21811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.8484982c ...
	I0610 10:39:10.074877   21811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.8484982c: {Name:mk0af6f9fe1bbf80810ba512a39e7977f0d9fb54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:39:10.074969   21811 certs.go:381] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.8484982c -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt
	I0610 10:39:10.075124   21811 certs.go:385] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.8484982c -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key
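Because a second control-plane node is joining, the apiserver serving cert is regenerated with both node IPs plus the HA virtual IP (192.168.39.254) in its SAN list, as the IP set above shows. That can be confirmed on the finished cert with a standard openssl inspection (not part of minikube's own flow):

    # List the Subject Alternative Names baked into the regenerated apiserver cert.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt \
      | grep -A1 'Subject Alternative Name'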
	I0610 10:39:10.075296   21811 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key
	I0610 10:39:10.075316   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 10:39:10.075334   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 10:39:10.075354   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 10:39:10.075372   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 10:39:10.075388   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 10:39:10.075404   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 10:39:10.075460   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 10:39:10.075486   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 10:39:10.075550   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem (1338 bytes)
	W0610 10:39:10.075590   21811 certs.go:480] ignoring /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758_empty.pem, impossibly tiny 0 bytes
	I0610 10:39:10.075603   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 10:39:10.075637   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem (1082 bytes)
	I0610 10:39:10.075669   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem (1123 bytes)
	I0610 10:39:10.075698   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem (1675 bytes)
	I0610 10:39:10.075752   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem (1708 bytes)
	I0610 10:39:10.075786   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:39:10.075805   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem -> /usr/share/ca-certificates/10758.pem
	I0610 10:39:10.075822   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /usr/share/ca-certificates/107582.pem
	I0610 10:39:10.075862   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:39:10.078847   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:39:10.079250   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:39:10.079282   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:39:10.079389   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:39:10.079593   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:39:10.079716   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:39:10.079850   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:39:10.153380   21811 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0610 10:39:10.157647   21811 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0610 10:39:10.168332   21811 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0610 10:39:10.171943   21811 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0610 10:39:10.182024   21811 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0610 10:39:10.185959   21811 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0610 10:39:10.195911   21811 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0610 10:39:10.199956   21811 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0610 10:39:10.209493   21811 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0610 10:39:10.213095   21811 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0610 10:39:10.222774   21811 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0610 10:39:10.226786   21811 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0610 10:39:10.237835   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 10:39:10.262884   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 10:39:10.284815   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 10:39:10.309285   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 10:39:10.331709   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0610 10:39:10.354663   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 10:39:10.376921   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 10:39:10.399148   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 10:39:10.420770   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 10:39:10.442307   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem --> /usr/share/ca-certificates/10758.pem (1338 bytes)
	I0610 10:39:10.463860   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /usr/share/ca-certificates/107582.pem (1708 bytes)
	I0610 10:39:10.484893   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0610 10:39:10.499993   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0610 10:39:10.514852   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0610 10:39:10.531002   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0610 10:39:10.545985   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0610 10:39:10.560631   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0610 10:39:10.575797   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0610 10:39:10.592285   21811 ssh_runner.go:195] Run: openssl version
	I0610 10:39:10.597801   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10758.pem && ln -fs /usr/share/ca-certificates/10758.pem /etc/ssl/certs/10758.pem"
	I0610 10:39:10.610697   21811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10758.pem
	I0610 10:39:10.614973   21811 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:34 /usr/share/ca-certificates/10758.pem
	I0610 10:39:10.615022   21811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10758.pem
	I0610 10:39:10.621057   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10758.pem /etc/ssl/certs/51391683.0"
	I0610 10:39:10.632134   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107582.pem && ln -fs /usr/share/ca-certificates/107582.pem /etc/ssl/certs/107582.pem"
	I0610 10:39:10.643365   21811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107582.pem
	I0610 10:39:10.647813   21811 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:34 /usr/share/ca-certificates/107582.pem
	I0610 10:39:10.647866   21811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107582.pem
	I0610 10:39:10.653463   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107582.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 10:39:10.663550   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 10:39:10.673192   21811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:39:10.677321   21811 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:39:10.677370   21811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:39:10.682626   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
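Each openssl/ln pair above installs a CA under the subject-hash symlink name that OpenSSL's certificate lookup expects; for any one certificate the pattern is:

    # Link a CA into /etc/ssl/certs under its subject-hash name (b5213941.0 for minikubeCA here).
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"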
	I0610 10:39:10.693262   21811 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 10:39:10.697029   21811 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 10:39:10.697083   21811 kubeadm.go:928] updating node {m02 192.168.39.230 8443 v1.30.1 crio true true} ...
	I0610 10:39:10.697178   21811 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565925-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 10:39:10.697210   21811 kube-vip.go:115] generating kube-vip config ...
	I0610 10:39:10.697245   21811 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0610 10:39:10.714012   21811 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0610 10:39:10.714073   21811 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0610 10:39:10.714119   21811 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 10:39:10.723444   21811 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0610 10:39:10.723513   21811 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0610 10:39:10.732583   21811 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0610 10:39:10.732612   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0610 10:39:10.732640   21811 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubelet
	I0610 10:39:10.732672   21811 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubeadm
	I0610 10:39:10.732682   21811 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0610 10:39:10.736809   21811 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0610 10:39:10.736838   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0610 10:39:18.739027   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0610 10:39:18.739108   21811 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0610 10:39:18.743681   21811 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0610 10:39:18.743722   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0610 10:39:27.118467   21811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:39:27.132828   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0610 10:39:27.132917   21811 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0610 10:39:27.137087   21811 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0610 10:39:27.137124   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
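
	Each binary above is fetched from dl.k8s.io with a ?checksum=file:... URL, i.e. the download is verified against the published .sha256 file before being cached and copied to the node. A minimal Go sketch of that pattern, doing the checksum comparison by hand for the v1.30.1 kubectl URL from the log (not the checksum-URL helper minikube itself uses):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL into memory; fine for a sketch, not for 100MB kubelet.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	want := strings.Fields(string(sumFile))[0] // published hex digest
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		fmt.Fprintln(os.Stderr, "checksum mismatch for kubectl")
		os.Exit(1)
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
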
	I0610 10:39:27.506676   21811 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0610 10:39:27.516027   21811 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0610 10:39:27.532290   21811 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 10:39:27.548268   21811 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0610 10:39:27.564880   21811 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0610 10:39:27.568734   21811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 10:39:27.580388   21811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:39:27.719636   21811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 10:39:27.737698   21811 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:39:27.738032   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:39:27.738071   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:39:27.752801   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42527
	I0610 10:39:27.753218   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:39:27.753721   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:39:27.753746   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:39:27.754078   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:39:27.754285   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:39:27.754455   21811 start.go:316] joinCluster: &{Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cluster
Name:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:39:27.754549   21811 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0610 10:39:27.754567   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:39:27.757868   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:39:27.758394   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:39:27.758417   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:39:27.758672   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:39:27.758853   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:39:27.759017   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:39:27.759898   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:39:27.932467   21811 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 10:39:27.932518   21811 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mirzni.yjdf9m9snyreq4hg --discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565925-m02 --control-plane --apiserver-advertise-address=192.168.39.230 --apiserver-bind-port=8443"
	I0610 10:39:49.366745   21811 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mirzni.yjdf9m9snyreq4hg --discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565925-m02 --control-plane --apiserver-advertise-address=192.168.39.230 --apiserver-bind-port=8443": (21.434201742s)
	I0610 10:39:49.366782   21811 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0610 10:39:49.936205   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565925-m02 minikube.k8s.io/updated_at=2024_06_10T10_39_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=ha-565925 minikube.k8s.io/primary=false
	I0610 10:39:50.059102   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-565925-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0610 10:39:50.164744   21811 start.go:318] duration metric: took 22.410284983s to joinCluster
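
	The join above is driven by two shell commands visible in the log: "kubeadm token create --print-join-command --ttl=0" on the existing control plane, then the printed join command plus the extra control-plane flags on the new node. A rough Go sketch of that flow, assuming both steps run on the local machine (minikube runs them on different hosts over SSH); the flag values are copied from this run for illustration only:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// Prefix used in the log so kubeadm resolves from minikube's binaries dir.
const sudoEnv = `sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" `

func run(script string) (string, error) {
	cmd := exec.Command("/bin/bash", "-c", script)
	cmd.Stderr = os.Stderr
	out, err := cmd.Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// Step 1 (on the existing control plane): print a reusable join command
	// backed by a non-expiring token.
	joinCmd, err := run(sudoEnv + "kubeadm token create --print-join-command --ttl=0")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Step 2 (on the joining node): execute it with the control-plane flags
	// shown in the log; node name and advertise address are this run's values.
	full := joinCmd +
		" --ignore-preflight-errors=all" +
		" --cri-socket unix:///var/run/crio/crio.sock" +
		" --node-name=ha-565925-m02 --control-plane" +
		" --apiserver-advertise-address=192.168.39.230 --apiserver-bind-port=8443"
	if _, err := run(sudoEnv + full); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("control-plane node joined")
}
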
	I0610 10:39:50.164838   21811 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 10:39:50.166487   21811 out.go:177] * Verifying Kubernetes components...
	I0610 10:39:50.165194   21811 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:39:50.167939   21811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:39:50.440343   21811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 10:39:50.502388   21811 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 10:39:50.502632   21811 kapi.go:59] client config for ha-565925: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.crt", KeyFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.key", CAFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfaf80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0610 10:39:50.502691   21811 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.208:8443
	I0610 10:39:50.502936   21811 node_ready.go:35] waiting up to 6m0s for node "ha-565925-m02" to be "Ready" ...
	I0610 10:39:50.503017   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:50.503029   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:50.503039   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:50.503045   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:50.514120   21811 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0610 10:39:51.004139   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:51.004164   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:51.004176   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:51.004181   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:51.010316   21811 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 10:39:51.504133   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:51.504154   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:51.504162   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:51.504165   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:51.508181   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:52.004000   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:52.004019   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:52.004026   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:52.004030   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:52.007220   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:52.503311   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:52.503332   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:52.503339   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:52.503343   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:52.565666   21811 round_trippers.go:574] Response Status: 200 OK in 62 milliseconds
	I0610 10:39:52.566288   21811 node_ready.go:53] node "ha-565925-m02" has status "Ready":"False"
	I0610 10:39:53.004046   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:53.004065   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:53.004073   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:53.004077   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:53.007233   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:53.503757   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:53.503778   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:53.503785   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:53.503788   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:53.507332   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:54.003676   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:54.003702   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:54.003713   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:54.003719   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:54.009350   21811 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 10:39:54.503153   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:54.503199   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:54.503209   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:54.503215   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:54.506503   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:55.003474   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:55.003500   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:55.003512   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:55.003518   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:55.007184   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:55.008071   21811 node_ready.go:53] node "ha-565925-m02" has status "Ready":"False"
	I0610 10:39:55.503386   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:55.503408   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:55.503416   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:55.503419   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:55.506765   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:56.003563   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:56.003583   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:56.003591   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:56.003595   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:56.007630   21811 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:39:56.503451   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:56.503478   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:56.503488   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:56.503493   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:56.507452   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:57.003293   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:57.003313   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:57.003321   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:57.003325   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:57.006086   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:39:57.503183   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:57.503206   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:57.503214   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:57.503219   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:57.506997   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:57.508088   21811 node_ready.go:53] node "ha-565925-m02" has status "Ready":"False"
	I0610 10:39:58.003837   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:58.003857   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:58.003863   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:58.003867   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:58.007311   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:58.008132   21811 node_ready.go:49] node "ha-565925-m02" has status "Ready":"True"
	I0610 10:39:58.008150   21811 node_ready.go:38] duration metric: took 7.505198344s for node "ha-565925-m02" to be "Ready" ...
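
	The node_ready wait above is a simple poll of GET /api/v1/nodes/<name> until the NodeReady condition reports True. A minimal client-go sketch of the same loop, assuming a local kubeconfig and the node name from this run; it is illustrative, not minikube's node_ready.go:

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same overall budget as the log's "waiting up to 6m0s".
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		n, err := client.CoreV1().Nodes().Get(ctx, "ha-565925-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Fprintln(os.Stderr, "timed out waiting for node Ready")
			os.Exit(1)
		case <-time.After(500 * time.Millisecond):
		}
	}
}
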
	I0610 10:39:58.008158   21811 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 10:39:58.008248   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I0610 10:39:58.008257   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:58.008263   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:58.008266   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:58.015011   21811 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 10:39:58.023036   21811 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace to be "Ready" ...
	I0610 10:39:58.023115   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 10:39:58.023128   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:58.023138   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:58.023145   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:58.025950   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:39:58.027016   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:39:58.027033   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:58.027040   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:58.027044   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:58.029596   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:39:58.030412   21811 pod_ready.go:92] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"True"
	I0610 10:39:58.030428   21811 pod_ready.go:81] duration metric: took 7.36967ms for pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace to be "Ready" ...
	I0610 10:39:58.030436   21811 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wn6nh" in "kube-system" namespace to be "Ready" ...
	I0610 10:39:58.030480   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wn6nh
	I0610 10:39:58.030492   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:58.030499   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:58.030504   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:58.033313   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:39:58.033962   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:39:58.033983   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:58.033990   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:58.033993   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:58.036194   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:39:58.036738   21811 pod_ready.go:92] pod "coredns-7db6d8ff4d-wn6nh" in "kube-system" namespace has status "Ready":"True"
	I0610 10:39:58.036756   21811 pod_ready.go:81] duration metric: took 6.31506ms for pod "coredns-7db6d8ff4d-wn6nh" in "kube-system" namespace to be "Ready" ...
	I0610 10:39:58.036765   21811 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:39:58.036808   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925
	I0610 10:39:58.036815   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:58.036837   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:58.036842   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:58.039110   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:39:58.039765   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:39:58.039784   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:58.039793   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:58.039800   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:58.042406   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:39:58.043194   21811 pod_ready.go:92] pod "etcd-ha-565925" in "kube-system" namespace has status "Ready":"True"
	I0610 10:39:58.043214   21811 pod_ready.go:81] duration metric: took 6.442915ms for pod "etcd-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:39:58.043226   21811 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:39:58.043286   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m02
	I0610 10:39:58.043298   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:58.043308   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:58.043314   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:58.045880   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:39:58.046485   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:58.046503   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:58.046513   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:58.046519   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:58.048890   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:39:58.543724   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m02
	I0610 10:39:58.543751   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:58.543763   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:58.543771   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:58.547201   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:58.547764   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:58.547781   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:58.547788   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:58.547792   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:58.550608   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:39:59.043466   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m02
	I0610 10:39:59.043489   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:59.043497   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:59.043500   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:59.046573   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:59.047129   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:59.047144   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:59.047151   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:59.047156   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:59.049633   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:39:59.050034   21811 pod_ready.go:92] pod "etcd-ha-565925-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 10:39:59.050050   21811 pod_ready.go:81] duration metric: took 1.006817413s for pod "etcd-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:39:59.050063   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:39:59.050106   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925
	I0610 10:39:59.050117   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:59.050125   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:59.050131   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:59.052559   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:39:59.204496   21811 request.go:629] Waited for 151.324356ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:39:59.204548   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:39:59.204553   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:59.204560   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:59.204564   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:59.207767   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:59.208449   21811 pod_ready.go:92] pod "kube-apiserver-ha-565925" in "kube-system" namespace has status "Ready":"True"
	I0610 10:39:59.208478   21811 pod_ready.go:81] duration metric: took 158.407888ms for pod "kube-apiserver-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:39:59.208492   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:39:59.403868   21811 request.go:629] Waited for 195.296949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925-m02
	I0610 10:39:59.403977   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925-m02
	I0610 10:39:59.403993   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:59.404005   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:59.404014   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:59.407224   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:59.604375   21811 request.go:629] Waited for 196.447688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:59.604450   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:59.604456   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:59.604464   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:59.604469   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:59.607767   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:59.804574   21811 request.go:629] Waited for 95.276273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925-m02
	I0610 10:39:59.804625   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925-m02
	I0610 10:39:59.804630   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:59.804637   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:59.804641   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:59.808860   21811 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:40:00.004655   21811 request.go:629] Waited for 194.884512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:00.004735   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:00.004745   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:00.004753   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:00.004759   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:00.008608   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:00.209433   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925-m02
	I0610 10:40:00.209460   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:00.209473   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:00.209478   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:00.212363   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:40:00.404476   21811 request.go:629] Waited for 191.368366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:00.404538   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:00.404546   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:00.404557   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:00.404572   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:00.408429   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:00.709062   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925-m02
	I0610 10:40:00.709082   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:00.709091   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:00.709094   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:00.712283   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:00.804349   21811 request.go:629] Waited for 91.269028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:00.804401   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:00.804407   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:00.804414   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:00.804422   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:00.807309   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:40:01.209274   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925-m02
	I0610 10:40:01.209295   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:01.209302   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:01.209306   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:01.212931   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:01.213927   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:01.213941   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:01.213947   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:01.213950   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:01.216958   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:40:01.217469   21811 pod_ready.go:102] pod "kube-apiserver-ha-565925-m02" in "kube-system" namespace has status "Ready":"False"
	I0610 10:40:01.709420   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925-m02
	I0610 10:40:01.709442   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:01.709452   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:01.709458   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:01.712358   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:40:01.713157   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:01.713175   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:01.713191   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:01.713201   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:01.716001   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:40:02.208854   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925-m02
	I0610 10:40:02.208883   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:02.208895   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:02.208899   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:02.211845   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:40:02.212611   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:02.212630   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:02.212640   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:02.212645   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:02.215148   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:40:02.215545   21811 pod_ready.go:92] pod "kube-apiserver-ha-565925-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 10:40:02.215561   21811 pod_ready.go:81] duration metric: took 3.007059008s for pod "kube-apiserver-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:40:02.215570   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:40:02.215630   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565925
	I0610 10:40:02.215640   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:02.215647   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:02.215652   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:02.218200   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:40:02.404194   21811 request.go:629] Waited for 185.334966ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:40:02.404258   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:40:02.404266   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:02.404276   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:02.404283   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:02.407282   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:40:02.407833   21811 pod_ready.go:92] pod "kube-controller-manager-ha-565925" in "kube-system" namespace has status "Ready":"True"
	I0610 10:40:02.407851   21811 pod_ready.go:81] duration metric: took 192.275745ms for pod "kube-controller-manager-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:40:02.407862   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:40:02.604340   21811 request.go:629] Waited for 196.400035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565925-m02
	I0610 10:40:02.604408   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565925-m02
	I0610 10:40:02.604415   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:02.604426   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:02.604432   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:02.607940   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:02.803877   21811 request.go:629] Waited for 195.344559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:02.803932   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:02.803936   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:02.803949   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:02.803954   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:02.807838   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:02.808698   21811 pod_ready.go:92] pod "kube-controller-manager-ha-565925-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 10:40:02.808721   21811 pod_ready.go:81] duration metric: took 400.852342ms for pod "kube-controller-manager-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:40:02.808734   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vbgnx" in "kube-system" namespace to be "Ready" ...
	I0610 10:40:03.004936   21811 request.go:629] Waited for 196.135591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbgnx
	I0610 10:40:03.005038   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbgnx
	I0610 10:40:03.005045   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:03.005051   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:03.005055   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:03.008304   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:03.204351   21811 request.go:629] Waited for 195.385662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:03.204425   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:03.204435   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:03.204450   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:03.204463   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:03.208001   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:03.208531   21811 pod_ready.go:92] pod "kube-proxy-vbgnx" in "kube-system" namespace has status "Ready":"True"
	I0610 10:40:03.208557   21811 pod_ready.go:81] duration metric: took 399.814343ms for pod "kube-proxy-vbgnx" in "kube-system" namespace to be "Ready" ...
	I0610 10:40:03.208580   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wdjhn" in "kube-system" namespace to be "Ready" ...
	I0610 10:40:03.404626   21811 request.go:629] Waited for 195.970662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wdjhn
	I0610 10:40:03.404691   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wdjhn
	I0610 10:40:03.404696   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:03.404703   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:03.404706   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:03.408644   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:03.604761   21811 request.go:629] Waited for 195.395719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:40:03.604837   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:40:03.604847   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:03.604880   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:03.604892   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:03.607908   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:03.608556   21811 pod_ready.go:92] pod "kube-proxy-wdjhn" in "kube-system" namespace has status "Ready":"True"
	I0610 10:40:03.608574   21811 pod_ready.go:81] duration metric: took 399.981689ms for pod "kube-proxy-wdjhn" in "kube-system" namespace to be "Ready" ...
	I0610 10:40:03.608584   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:40:03.804812   21811 request.go:629] Waited for 196.151277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565925
	I0610 10:40:03.804886   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565925
	I0610 10:40:03.804893   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:03.804903   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:03.804911   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:03.808282   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:04.004256   21811 request.go:629] Waited for 195.367711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:40:04.004336   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:40:04.004344   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:04.004356   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:04.004364   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:04.007931   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:04.008517   21811 pod_ready.go:92] pod "kube-scheduler-ha-565925" in "kube-system" namespace has status "Ready":"True"
	I0610 10:40:04.008536   21811 pod_ready.go:81] duration metric: took 399.94677ms for pod "kube-scheduler-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:40:04.008545   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:40:04.204678   21811 request.go:629] Waited for 196.065911ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565925-m02
	I0610 10:40:04.204750   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565925-m02
	I0610 10:40:04.204756   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:04.204771   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:04.204777   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:04.208588   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:04.404776   21811 request.go:629] Waited for 195.352353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:04.404851   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:04.404861   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:04.404877   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:04.404890   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:04.407808   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:40:04.408407   21811 pod_ready.go:92] pod "kube-scheduler-ha-565925-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 10:40:04.408426   21811 pod_ready.go:81] duration metric: took 399.874222ms for pod "kube-scheduler-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:40:04.408440   21811 pod_ready.go:38] duration metric: took 6.400239578s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
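
	The extra wait above lists kube-system pods for each of the named label selectors and requires every match to reach the PodReady condition. A compact client-go sketch of that check under the same assumptions as the node poll (local kubeconfig, selectors copied from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Label selectors for the system-critical components named in the log.
var selectors = []string{
	"k8s-app=kube-dns",
	"component=etcd",
	"component=kube-apiserver",
	"component=kube-controller-manager",
	"k8s-app=kube-proxy",
	"component=kube-scheduler",
}

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for _, sel := range selectors {
		for {
			pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
			if err == nil && len(pods.Items) > 0 {
				allReady := true
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						allReady = false
						break
					}
				}
				if allReady {
					fmt.Println(sel, "Ready")
					break
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
}
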
	I0610 10:40:04.408457   21811 api_server.go:52] waiting for apiserver process to appear ...
	I0610 10:40:04.408515   21811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:40:04.426888   21811 api_server.go:72] duration metric: took 14.262012429s to wait for apiserver process to appear ...
	I0610 10:40:04.426915   21811 api_server.go:88] waiting for apiserver healthz status ...
	I0610 10:40:04.426959   21811 api_server.go:253] Checking apiserver healthz at https://192.168.39.208:8443/healthz ...
	I0610 10:40:04.431265   21811 api_server.go:279] https://192.168.39.208:8443/healthz returned 200:
	ok
	I0610 10:40:04.431340   21811 round_trippers.go:463] GET https://192.168.39.208:8443/version
	I0610 10:40:04.431351   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:04.431361   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:04.431369   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:04.432338   21811 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 10:40:04.432479   21811 api_server.go:141] control plane version: v1.30.1
	I0610 10:40:04.432501   21811 api_server.go:131] duration metric: took 5.579091ms to wait for apiserver health ...
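
For context, the healthz wait logged above is essentially a polling loop that GETs the apiserver's /healthz endpoint until it answers "200: ok". A minimal Go sketch of that idea follows; the URL, timeout, and the InsecureSkipVerify transport are illustrative assumptions, not minikube's actual helper, which authenticates against the cluster CA and uses its own retry utilities.

	// healthz_wait.go - minimal sketch: poll https://<apiserver>:8443/healthz until it returns "ok".
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		// Assumption: TLS verification is skipped for brevity; a real client would trust the cluster CA.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil // matches the "returned 200: ok" lines in the log
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.208:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
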
	I0610 10:40:04.432511   21811 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 10:40:04.603986   21811 request.go:629] Waited for 171.407019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I0610 10:40:04.604055   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I0610 10:40:04.604066   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:04.604078   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:04.604113   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:04.610843   21811 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 10:40:04.616905   21811 system_pods.go:59] 17 kube-system pods found
	I0610 10:40:04.616943   21811 system_pods.go:61] "coredns-7db6d8ff4d-545cf" [7564efde-b96c-48b3-b194-bca695f7ae95] Running
	I0610 10:40:04.616961   21811 system_pods.go:61] "coredns-7db6d8ff4d-wn6nh" [9e47f047-e98b-48c8-8a33-8f790a3e8017] Running
	I0610 10:40:04.616968   21811 system_pods.go:61] "etcd-ha-565925" [527cd8fc-9ac8-4432-a265-910957e9268f] Running
	I0610 10:40:04.616973   21811 system_pods.go:61] "etcd-ha-565925-m02" [7068fe45-72fe-4204-8742-d8803e585954] Running
	I0610 10:40:04.616978   21811 system_pods.go:61] "kindnet-9jv7q" [2f97ff84-bae1-4e63-9e9a-08e9e7afe68b] Running
	I0610 10:40:04.616983   21811 system_pods.go:61] "kindnet-rnn59" [9141e131-eebc-4f51-8b55-46ff649ffaee] Running
	I0610 10:40:04.616989   21811 system_pods.go:61] "kube-apiserver-ha-565925" [75b7b060-85f2-45ca-a58e-a42a8c2d7fab] Running
	I0610 10:40:04.616994   21811 system_pods.go:61] "kube-apiserver-ha-565925-m02" [a7e4eed5-4ada-4063-a8e1-f82ed820f684] Running
	I0610 10:40:04.617003   21811 system_pods.go:61] "kube-controller-manager-ha-565925" [cd41ddc9-22af-4789-a9ea-3663a5de415b] Running
	I0610 10:40:04.617009   21811 system_pods.go:61] "kube-controller-manager-ha-565925-m02" [6b2d5860-4e09-4eeb-a9e3-24952ec3fab4] Running
	I0610 10:40:04.617015   21811 system_pods.go:61] "kube-proxy-vbgnx" [f43735ae-adc0-4fe4-992e-b640b52886d7] Running
	I0610 10:40:04.617020   21811 system_pods.go:61] "kube-proxy-wdjhn" [da3ac11b-0906-4695-80b1-f3f4f1a34de1] Running
	I0610 10:40:04.617029   21811 system_pods.go:61] "kube-scheduler-ha-565925" [74663e0a-7f9e-4211-b165-39358cb3b3e2] Running
	I0610 10:40:04.617036   21811 system_pods.go:61] "kube-scheduler-ha-565925-m02" [745d6073-f0af-4aa5-9345-38c9b88dad69] Running
	I0610 10:40:04.617044   21811 system_pods.go:61] "kube-vip-ha-565925" [039ffa3e-aac6-4bdc-a576-0158c7fb283d] Running
	I0610 10:40:04.617049   21811 system_pods.go:61] "kube-vip-ha-565925-m02" [f28be16a-38b2-4746-8b18-ab0014783aad] Running
	I0610 10:40:04.617055   21811 system_pods.go:61] "storage-provisioner" [0ca60a36-c445-4520-b857-7df39dfed848] Running
	I0610 10:40:04.617063   21811 system_pods.go:74] duration metric: took 184.546241ms to wait for pod list to return data ...
	I0610 10:40:04.617098   21811 default_sa.go:34] waiting for default service account to be created ...
	I0610 10:40:04.804530   21811 request.go:629] Waited for 187.351129ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/default/serviceaccounts
	I0610 10:40:04.804582   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/default/serviceaccounts
	I0610 10:40:04.804587   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:04.804594   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:04.804598   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:04.808093   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:04.808307   21811 default_sa.go:45] found service account: "default"
	I0610 10:40:04.808326   21811 default_sa.go:55] duration metric: took 191.214996ms for default service account to be created ...
	I0610 10:40:04.808337   21811 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 10:40:05.004375   21811 request.go:629] Waited for 195.968568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I0610 10:40:05.004450   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I0610 10:40:05.004456   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:05.004471   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:05.004482   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:05.011392   21811 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 10:40:05.019047   21811 system_pods.go:86] 17 kube-system pods found
	I0610 10:40:05.019082   21811 system_pods.go:89] "coredns-7db6d8ff4d-545cf" [7564efde-b96c-48b3-b194-bca695f7ae95] Running
	I0610 10:40:05.019089   21811 system_pods.go:89] "coredns-7db6d8ff4d-wn6nh" [9e47f047-e98b-48c8-8a33-8f790a3e8017] Running
	I0610 10:40:05.019094   21811 system_pods.go:89] "etcd-ha-565925" [527cd8fc-9ac8-4432-a265-910957e9268f] Running
	I0610 10:40:05.019099   21811 system_pods.go:89] "etcd-ha-565925-m02" [7068fe45-72fe-4204-8742-d8803e585954] Running
	I0610 10:40:05.019103   21811 system_pods.go:89] "kindnet-9jv7q" [2f97ff84-bae1-4e63-9e9a-08e9e7afe68b] Running
	I0610 10:40:05.019107   21811 system_pods.go:89] "kindnet-rnn59" [9141e131-eebc-4f51-8b55-46ff649ffaee] Running
	I0610 10:40:05.019112   21811 system_pods.go:89] "kube-apiserver-ha-565925" [75b7b060-85f2-45ca-a58e-a42a8c2d7fab] Running
	I0610 10:40:05.019116   21811 system_pods.go:89] "kube-apiserver-ha-565925-m02" [a7e4eed5-4ada-4063-a8e1-f82ed820f684] Running
	I0610 10:40:05.019122   21811 system_pods.go:89] "kube-controller-manager-ha-565925" [cd41ddc9-22af-4789-a9ea-3663a5de415b] Running
	I0610 10:40:05.019127   21811 system_pods.go:89] "kube-controller-manager-ha-565925-m02" [6b2d5860-4e09-4eeb-a9e3-24952ec3fab4] Running
	I0610 10:40:05.019135   21811 system_pods.go:89] "kube-proxy-vbgnx" [f43735ae-adc0-4fe4-992e-b640b52886d7] Running
	I0610 10:40:05.019139   21811 system_pods.go:89] "kube-proxy-wdjhn" [da3ac11b-0906-4695-80b1-f3f4f1a34de1] Running
	I0610 10:40:05.019147   21811 system_pods.go:89] "kube-scheduler-ha-565925" [74663e0a-7f9e-4211-b165-39358cb3b3e2] Running
	I0610 10:40:05.019151   21811 system_pods.go:89] "kube-scheduler-ha-565925-m02" [745d6073-f0af-4aa5-9345-38c9b88dad69] Running
	I0610 10:40:05.019157   21811 system_pods.go:89] "kube-vip-ha-565925" [039ffa3e-aac6-4bdc-a576-0158c7fb283d] Running
	I0610 10:40:05.019162   21811 system_pods.go:89] "kube-vip-ha-565925-m02" [f28be16a-38b2-4746-8b18-ab0014783aad] Running
	I0610 10:40:05.019169   21811 system_pods.go:89] "storage-provisioner" [0ca60a36-c445-4520-b857-7df39dfed848] Running
	I0610 10:40:05.019175   21811 system_pods.go:126] duration metric: took 210.833341ms to wait for k8s-apps to be running ...
	I0610 10:40:05.019185   21811 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 10:40:05.019242   21811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:40:05.036446   21811 system_svc.go:56] duration metric: took 17.251408ms WaitForService to wait for kubelet
	I0610 10:40:05.036475   21811 kubeadm.go:576] duration metric: took 14.871603454s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:40:05.036494   21811 node_conditions.go:102] verifying NodePressure condition ...
	I0610 10:40:05.204902   21811 request.go:629] Waited for 168.331352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes
	I0610 10:40:05.205006   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes
	I0610 10:40:05.205018   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:05.205030   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:05.205036   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:05.208916   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:05.209978   21811 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 10:40:05.209999   21811 node_conditions.go:123] node cpu capacity is 2
	I0610 10:40:05.210011   21811 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 10:40:05.210015   21811 node_conditions.go:123] node cpu capacity is 2
	I0610 10:40:05.210020   21811 node_conditions.go:105] duration metric: took 173.520926ms to run NodePressure ...
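
The NodePressure step above reads each node's capacity from the /api/v1/nodes listing (the log reports cpu "2" and ephemeral storage "17734596Ki" for both control-plane nodes). A rough client-go equivalent is sketched below; the kubeconfig path is an assumption, and minikube itself issues the request through its own round-tripper rather than client-go here.

	// node_capacity.go - sketch: list nodes and print the capacity fields reported in the log.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: a kubeconfig for the ha-565925 profile lives at this path.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
	}
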
	I0610 10:40:05.210031   21811 start.go:240] waiting for startup goroutines ...
	I0610 10:40:05.210055   21811 start.go:254] writing updated cluster config ...
	I0610 10:40:05.212059   21811 out.go:177] 
	I0610 10:40:05.213524   21811 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:40:05.213649   21811 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json ...
	I0610 10:40:05.215403   21811 out.go:177] * Starting "ha-565925-m03" control-plane node in "ha-565925" cluster
	I0610 10:40:05.216640   21811 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 10:40:05.216669   21811 cache.go:56] Caching tarball of preloaded images
	I0610 10:40:05.216787   21811 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 10:40:05.216803   21811 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0610 10:40:05.216923   21811 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json ...
	I0610 10:40:05.217116   21811 start.go:360] acquireMachinesLock for ha-565925-m03: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:40:05.217156   21811 start.go:364] duration metric: took 21.755µs to acquireMachinesLock for "ha-565925-m03"
	I0610 10:40:05.217172   21811 start.go:93] Provisioning new machine with config: &{Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 10:40:05.217266   21811 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0610 10:40:05.218898   21811 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 10:40:05.218992   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:40:05.219026   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:40:05.233379   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38855
	I0610 10:40:05.233799   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:40:05.234277   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:40:05.234301   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:40:05.234703   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:40:05.234895   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetMachineName
	I0610 10:40:05.235088   21811 main.go:141] libmachine: (ha-565925-m03) Calling .DriverName
	I0610 10:40:05.235242   21811 start.go:159] libmachine.API.Create for "ha-565925" (driver="kvm2")
	I0610 10:40:05.235271   21811 client.go:168] LocalClient.Create starting
	I0610 10:40:05.235310   21811 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem
	I0610 10:40:05.235350   21811 main.go:141] libmachine: Decoding PEM data...
	I0610 10:40:05.235370   21811 main.go:141] libmachine: Parsing certificate...
	I0610 10:40:05.235432   21811 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem
	I0610 10:40:05.235459   21811 main.go:141] libmachine: Decoding PEM data...
	I0610 10:40:05.235475   21811 main.go:141] libmachine: Parsing certificate...
	I0610 10:40:05.235502   21811 main.go:141] libmachine: Running pre-create checks...
	I0610 10:40:05.235513   21811 main.go:141] libmachine: (ha-565925-m03) Calling .PreCreateCheck
	I0610 10:40:05.235682   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetConfigRaw
	I0610 10:40:05.236048   21811 main.go:141] libmachine: Creating machine...
	I0610 10:40:05.236059   21811 main.go:141] libmachine: (ha-565925-m03) Calling .Create
	I0610 10:40:05.236219   21811 main.go:141] libmachine: (ha-565925-m03) Creating KVM machine...
	I0610 10:40:05.237677   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found existing default KVM network
	I0610 10:40:05.237786   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found existing private KVM network mk-ha-565925
	I0610 10:40:05.237946   21811 main.go:141] libmachine: (ha-565925-m03) Setting up store path in /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03 ...
	I0610 10:40:05.237977   21811 main.go:141] libmachine: (ha-565925-m03) Building disk image from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0610 10:40:05.238006   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:05.237910   22654 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:40:05.238081   21811 main.go:141] libmachine: (ha-565925-m03) Downloading /home/jenkins/minikube-integration/19046-3880/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 10:40:05.460882   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:05.460758   22654 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa...
	I0610 10:40:05.512643   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:05.512536   22654 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/ha-565925-m03.rawdisk...
	I0610 10:40:05.512673   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Writing magic tar header
	I0610 10:40:05.512683   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Writing SSH key tar header
	I0610 10:40:05.512692   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:05.512643   22654 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03 ...
	I0610 10:40:05.512823   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03
	I0610 10:40:05.512846   21811 main.go:141] libmachine: (ha-565925-m03) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03 (perms=drwx------)
	I0610 10:40:05.512858   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines
	I0610 10:40:05.512871   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:40:05.512885   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880
	I0610 10:40:05.512899   21811 main.go:141] libmachine: (ha-565925-m03) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines (perms=drwxr-xr-x)
	I0610 10:40:05.512910   21811 main.go:141] libmachine: (ha-565925-m03) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube (perms=drwxr-xr-x)
	I0610 10:40:05.512917   21811 main.go:141] libmachine: (ha-565925-m03) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880 (perms=drwxrwxr-x)
	I0610 10:40:05.512927   21811 main.go:141] libmachine: (ha-565925-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0610 10:40:05.512933   21811 main.go:141] libmachine: (ha-565925-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0610 10:40:05.512940   21811 main.go:141] libmachine: (ha-565925-m03) Creating domain...
	I0610 10:40:05.513015   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0610 10:40:05.513047   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Checking permissions on dir: /home/jenkins
	I0610 10:40:05.513063   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Checking permissions on dir: /home
	I0610 10:40:05.513076   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Skipping /home - not owner
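
Before the domain is defined, the driver has already produced two artifacts under the machine directory: the id_rsa keypair used later by WaitForSSH and provisioning, and the raw disk image backing the VM. A rough sketch of creating both follows; the target directory is a stand-in path, and the tar-header trick minikube uses to embed the key inside the disk image is deliberately omitted.

	// machine_artifacts.go - sketch: create an SSH keypair and a sparse raw disk like the ones logged above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		dir := "/tmp/ha-565925-m03" // assumption: stand-in for .minikube/machines/ha-565925-m03
		if err := os.MkdirAll(dir, 0o700); err != nil {
			panic(err)
		}

		// SSH keypair (id_rsa / id_rsa.pub).
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		privPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
		if err := os.WriteFile(dir+"/id_rsa", privPEM, 0o600); err != nil {
			panic(err)
		}
		pub, err := ssh.NewPublicKey(&key.PublicKey)
		if err != nil {
			panic(err)
		}
		if err := os.WriteFile(dir+"/id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
			panic(err)
		}

		// Sparse raw disk matching DiskSize:20000 (MB) from the machine config.
		disk, err := os.Create(dir + "/ha-565925-m03.rawdisk")
		if err != nil {
			panic(err)
		}
		defer disk.Close()
		if err := disk.Truncate(20000 * 1024 * 1024); err != nil {
			panic(err)
		}
	}
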
	I0610 10:40:05.513953   21811 main.go:141] libmachine: (ha-565925-m03) define libvirt domain using xml: 
	I0610 10:40:05.513972   21811 main.go:141] libmachine: (ha-565925-m03) <domain type='kvm'>
	I0610 10:40:05.513982   21811 main.go:141] libmachine: (ha-565925-m03)   <name>ha-565925-m03</name>
	I0610 10:40:05.513990   21811 main.go:141] libmachine: (ha-565925-m03)   <memory unit='MiB'>2200</memory>
	I0610 10:40:05.514000   21811 main.go:141] libmachine: (ha-565925-m03)   <vcpu>2</vcpu>
	I0610 10:40:05.514010   21811 main.go:141] libmachine: (ha-565925-m03)   <features>
	I0610 10:40:05.514021   21811 main.go:141] libmachine: (ha-565925-m03)     <acpi/>
	I0610 10:40:05.514031   21811 main.go:141] libmachine: (ha-565925-m03)     <apic/>
	I0610 10:40:05.514042   21811 main.go:141] libmachine: (ha-565925-m03)     <pae/>
	I0610 10:40:05.514057   21811 main.go:141] libmachine: (ha-565925-m03)     
	I0610 10:40:05.514070   21811 main.go:141] libmachine: (ha-565925-m03)   </features>
	I0610 10:40:05.514087   21811 main.go:141] libmachine: (ha-565925-m03)   <cpu mode='host-passthrough'>
	I0610 10:40:05.514110   21811 main.go:141] libmachine: (ha-565925-m03)   
	I0610 10:40:05.514116   21811 main.go:141] libmachine: (ha-565925-m03)   </cpu>
	I0610 10:40:05.514130   21811 main.go:141] libmachine: (ha-565925-m03)   <os>
	I0610 10:40:05.514138   21811 main.go:141] libmachine: (ha-565925-m03)     <type>hvm</type>
	I0610 10:40:05.514147   21811 main.go:141] libmachine: (ha-565925-m03)     <boot dev='cdrom'/>
	I0610 10:40:05.514155   21811 main.go:141] libmachine: (ha-565925-m03)     <boot dev='hd'/>
	I0610 10:40:05.514164   21811 main.go:141] libmachine: (ha-565925-m03)     <bootmenu enable='no'/>
	I0610 10:40:05.514178   21811 main.go:141] libmachine: (ha-565925-m03)   </os>
	I0610 10:40:05.514214   21811 main.go:141] libmachine: (ha-565925-m03)   <devices>
	I0610 10:40:05.514240   21811 main.go:141] libmachine: (ha-565925-m03)     <disk type='file' device='cdrom'>
	I0610 10:40:05.514260   21811 main.go:141] libmachine: (ha-565925-m03)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/boot2docker.iso'/>
	I0610 10:40:05.514272   21811 main.go:141] libmachine: (ha-565925-m03)       <target dev='hdc' bus='scsi'/>
	I0610 10:40:05.514286   21811 main.go:141] libmachine: (ha-565925-m03)       <readonly/>
	I0610 10:40:05.514297   21811 main.go:141] libmachine: (ha-565925-m03)     </disk>
	I0610 10:40:05.514309   21811 main.go:141] libmachine: (ha-565925-m03)     <disk type='file' device='disk'>
	I0610 10:40:05.514333   21811 main.go:141] libmachine: (ha-565925-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0610 10:40:05.514354   21811 main.go:141] libmachine: (ha-565925-m03)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/ha-565925-m03.rawdisk'/>
	I0610 10:40:05.514367   21811 main.go:141] libmachine: (ha-565925-m03)       <target dev='hda' bus='virtio'/>
	I0610 10:40:05.514379   21811 main.go:141] libmachine: (ha-565925-m03)     </disk>
	I0610 10:40:05.514391   21811 main.go:141] libmachine: (ha-565925-m03)     <interface type='network'>
	I0610 10:40:05.514421   21811 main.go:141] libmachine: (ha-565925-m03)       <source network='mk-ha-565925'/>
	I0610 10:40:05.514443   21811 main.go:141] libmachine: (ha-565925-m03)       <model type='virtio'/>
	I0610 10:40:05.514456   21811 main.go:141] libmachine: (ha-565925-m03)     </interface>
	I0610 10:40:05.514468   21811 main.go:141] libmachine: (ha-565925-m03)     <interface type='network'>
	I0610 10:40:05.514480   21811 main.go:141] libmachine: (ha-565925-m03)       <source network='default'/>
	I0610 10:40:05.514491   21811 main.go:141] libmachine: (ha-565925-m03)       <model type='virtio'/>
	I0610 10:40:05.514505   21811 main.go:141] libmachine: (ha-565925-m03)     </interface>
	I0610 10:40:05.514515   21811 main.go:141] libmachine: (ha-565925-m03)     <serial type='pty'>
	I0610 10:40:05.514526   21811 main.go:141] libmachine: (ha-565925-m03)       <target port='0'/>
	I0610 10:40:05.514545   21811 main.go:141] libmachine: (ha-565925-m03)     </serial>
	I0610 10:40:05.514562   21811 main.go:141] libmachine: (ha-565925-m03)     <console type='pty'>
	I0610 10:40:05.514573   21811 main.go:141] libmachine: (ha-565925-m03)       <target type='serial' port='0'/>
	I0610 10:40:05.514585   21811 main.go:141] libmachine: (ha-565925-m03)     </console>
	I0610 10:40:05.514599   21811 main.go:141] libmachine: (ha-565925-m03)     <rng model='virtio'>
	I0610 10:40:05.514611   21811 main.go:141] libmachine: (ha-565925-m03)       <backend model='random'>/dev/random</backend>
	I0610 10:40:05.514623   21811 main.go:141] libmachine: (ha-565925-m03)     </rng>
	I0610 10:40:05.514638   21811 main.go:141] libmachine: (ha-565925-m03)     
	I0610 10:40:05.514649   21811 main.go:141] libmachine: (ha-565925-m03)     
	I0610 10:40:05.514657   21811 main.go:141] libmachine: (ha-565925-m03)   </devices>
	I0610 10:40:05.514669   21811 main.go:141] libmachine: (ha-565925-m03) </domain>
	I0610 10:40:05.514680   21811 main.go:141] libmachine: (ha-565925-m03) 
	I0610 10:40:05.521327   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:2e:39:d5 in network default
	I0610 10:40:05.521938   21811 main.go:141] libmachine: (ha-565925-m03) Ensuring networks are active...
	I0610 10:40:05.521960   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:05.522743   21811 main.go:141] libmachine: (ha-565925-m03) Ensuring network default is active
	I0610 10:40:05.523100   21811 main.go:141] libmachine: (ha-565925-m03) Ensuring network mk-ha-565925 is active
	I0610 10:40:05.523540   21811 main.go:141] libmachine: (ha-565925-m03) Getting domain xml...
	I0610 10:40:05.524230   21811 main.go:141] libmachine: (ha-565925-m03) Creating domain...
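
The XML printed above is handed to libvirt, which first defines the domain and then creates (starts) it. The kvm2 driver talks to libvirt through its API; the sketch below reproduces the same two steps by shelling out to virsh, with the XML assumed to have been saved to a temporary file.

	// define_domain.go - sketch: define and start a libvirt domain from an XML file via virsh.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s", name, args, out)
		return err
	}

	func main() {
		// Assumption: the domain XML shown in the log has been written to this file.
		const xmlPath = "/tmp/ha-565925-m03.xml"
		if err := run("virsh", "--connect", "qemu:///system", "define", xmlPath); err != nil {
			panic(err)
		}
		if err := run("virsh", "--connect", "qemu:///system", "start", "ha-565925-m03"); err != nil {
			panic(err)
		}
	}
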
	I0610 10:40:06.740424   21811 main.go:141] libmachine: (ha-565925-m03) Waiting to get IP...
	I0610 10:40:06.741319   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:06.741844   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:06.741868   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:06.741821   22654 retry.go:31] will retry after 311.64489ms: waiting for machine to come up
	I0610 10:40:07.055182   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:07.055696   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:07.055721   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:07.055648   22654 retry.go:31] will retry after 333.608993ms: waiting for machine to come up
	I0610 10:40:07.391058   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:07.391414   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:07.391439   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:07.391363   22654 retry.go:31] will retry after 429.022376ms: waiting for machine to come up
	I0610 10:40:07.822069   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:07.822478   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:07.822506   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:07.822431   22654 retry.go:31] will retry after 592.938721ms: waiting for machine to come up
	I0610 10:40:08.417392   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:08.417873   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:08.417902   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:08.417827   22654 retry.go:31] will retry after 629.38733ms: waiting for machine to come up
	I0610 10:40:09.049096   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:09.049554   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:09.049582   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:09.049513   22654 retry.go:31] will retry after 832.669925ms: waiting for machine to come up
	I0610 10:40:09.883539   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:09.884032   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:09.884063   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:09.883974   22654 retry.go:31] will retry after 829.939129ms: waiting for machine to come up
	I0610 10:40:10.715792   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:10.716263   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:10.716287   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:10.716226   22654 retry.go:31] will retry after 1.361129244s: waiting for machine to come up
	I0610 10:40:12.079856   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:12.080406   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:12.080433   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:12.080347   22654 retry.go:31] will retry after 1.717364358s: waiting for machine to come up
	I0610 10:40:13.800411   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:13.800943   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:13.800997   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:13.800898   22654 retry.go:31] will retry after 1.606518953s: waiting for machine to come up
	I0610 10:40:15.409197   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:15.409597   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:15.409621   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:15.409569   22654 retry.go:31] will retry after 1.751158033s: waiting for machine to come up
	I0610 10:40:17.162011   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:17.162609   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:17.162634   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:17.162572   22654 retry.go:31] will retry after 2.822466845s: waiting for machine to come up
	I0610 10:40:19.986284   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:19.986865   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:19.986907   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:19.986753   22654 retry.go:31] will retry after 3.077885171s: waiting for machine to come up
	I0610 10:40:23.066029   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:23.066407   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:23.066440   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:23.066379   22654 retry.go:31] will retry after 4.747341484s: waiting for machine to come up
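
The retry.go lines above show the driver polling the mk-ha-565925 network's DHCP leases with a growing backoff (311ms, 333ms, 429ms, ... 4.7s) until the new MAC obtains an address, which the next lines report. A minimal approximation of that wait, polling virsh domifaddr instead of the libvirt API, is sketched below; the attempt limit and backoff growth are illustrative.

	// wait_for_ip.go - sketch: poll "virsh domifaddr" until the freshly created VM reports an IPv4 address.
	package main

	import (
		"fmt"
		"os/exec"
		"regexp"
		"time"
	)

	var ipv4 = regexp.MustCompile(`(\d{1,3}\.){3}\d{1,3}`)

	func main() {
		backoff := 300 * time.Millisecond // grows on each miss, roughly like the retry.go lines above
		for attempt := 1; attempt <= 20; attempt++ {
			out, _ := exec.Command("virsh", "--connect", "qemu:///system",
				"domifaddr", "ha-565925-m03").CombinedOutput()
			if ip := ipv4.FindString(string(out)); ip != "" {
				fmt.Println("found IP:", ip)
				return
			}
			fmt.Printf("attempt %d: no lease yet, retrying in %s\n", attempt, backoff)
			time.Sleep(backoff)
			backoff += backoff / 2
		}
		fmt.Println("gave up waiting for an IP")
	}
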
	I0610 10:40:27.814983   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:27.815592   21811 main.go:141] libmachine: (ha-565925-m03) Found IP for machine: 192.168.39.76
	I0610 10:40:27.815635   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has current primary IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:27.815644   21811 main.go:141] libmachine: (ha-565925-m03) Reserving static IP address...
	I0610 10:40:27.816011   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find host DHCP lease matching {name: "ha-565925-m03", mac: "52:54:00:cf:67:38", ip: "192.168.39.76"} in network mk-ha-565925
	I0610 10:40:27.891235   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Getting to WaitForSSH function...
	I0610 10:40:27.891266   21811 main.go:141] libmachine: (ha-565925-m03) Reserved static IP address: 192.168.39.76
	I0610 10:40:27.891284   21811 main.go:141] libmachine: (ha-565925-m03) Waiting for SSH to be available...
	I0610 10:40:27.893996   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:27.894530   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925
	I0610 10:40:27.894556   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find defined IP address of network mk-ha-565925 interface with MAC address 52:54:00:cf:67:38
	I0610 10:40:27.894789   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Using SSH client type: external
	I0610 10:40:27.894816   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa (-rw-------)
	I0610 10:40:27.894846   21811 main.go:141] libmachine: (ha-565925-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0610 10:40:27.894863   21811 main.go:141] libmachine: (ha-565925-m03) DBG | About to run SSH command:
	I0610 10:40:27.894879   21811 main.go:141] libmachine: (ha-565925-m03) DBG | exit 0
	I0610 10:40:27.898815   21811 main.go:141] libmachine: (ha-565925-m03) DBG | SSH cmd err, output: exit status 255: 
	I0610 10:40:27.898831   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0610 10:40:27.898839   21811 main.go:141] libmachine: (ha-565925-m03) DBG | command : exit 0
	I0610 10:40:27.898850   21811 main.go:141] libmachine: (ha-565925-m03) DBG | err     : exit status 255
	I0610 10:40:27.898864   21811 main.go:141] libmachine: (ha-565925-m03) DBG | output  : 
	I0610 10:40:30.899345   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Getting to WaitForSSH function...
	I0610 10:40:30.902473   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:30.902956   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:30.902978   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:30.903221   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Using SSH client type: external
	I0610 10:40:30.903238   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa (-rw-------)
	I0610 10:40:30.903269   21811 main.go:141] libmachine: (ha-565925-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.76 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0610 10:40:30.903288   21811 main.go:141] libmachine: (ha-565925-m03) DBG | About to run SSH command:
	I0610 10:40:30.903304   21811 main.go:141] libmachine: (ha-565925-m03) DBG | exit 0
	I0610 10:40:31.026097   21811 main.go:141] libmachine: (ha-565925-m03) DBG | SSH cmd err, output: <nil>: 
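
WaitForSSH simply runs "exit 0" over SSH with the external client until the command succeeds; the first attempt above fails with status 255 because the guest's sshd is not up yet, and the second (about three seconds later) succeeds. A cut-down Go version of that probe follows, using the host, key path, and a subset of the options from the log; the retry interval is an assumption.

	// wait_for_ssh.go - sketch: retry "ssh ... exit 0" until the guest accepts the connection.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func sshProbe(host, keyPath string) error {
		args := []string{
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-i", keyPath,
			"docker@" + host,
			"exit", "0",
		}
		return exec.Command("ssh", args...).Run()
	}

	func main() {
		key := "/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa"
		for {
			if err := sshProbe("192.168.39.76", key); err == nil {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(3 * time.Second) // the log shows roughly a 3s gap between attempts
		}
	}
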
	I0610 10:40:31.026393   21811 main.go:141] libmachine: (ha-565925-m03) KVM machine creation complete!
	I0610 10:40:31.026699   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetConfigRaw
	I0610 10:40:31.027355   21811 main.go:141] libmachine: (ha-565925-m03) Calling .DriverName
	I0610 10:40:31.027545   21811 main.go:141] libmachine: (ha-565925-m03) Calling .DriverName
	I0610 10:40:31.027714   21811 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0610 10:40:31.027730   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetState
	I0610 10:40:31.028934   21811 main.go:141] libmachine: Detecting operating system of created instance...
	I0610 10:40:31.028980   21811 main.go:141] libmachine: Waiting for SSH to be available...
	I0610 10:40:31.029000   21811 main.go:141] libmachine: Getting to WaitForSSH function...
	I0610 10:40:31.029009   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:40:31.031448   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.031891   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:31.031918   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.032059   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:40:31.032242   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:31.032405   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:31.032554   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:40:31.032723   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:40:31.032930   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0610 10:40:31.032975   21811 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0610 10:40:31.132144   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 10:40:31.132179   21811 main.go:141] libmachine: Detecting the provisioner...
	I0610 10:40:31.132187   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:40:31.134873   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.135271   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:31.135296   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.135471   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:40:31.135664   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:31.135805   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:31.136004   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:40:31.136185   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:40:31.136375   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0610 10:40:31.136387   21811 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0610 10:40:31.233651   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0610 10:40:31.233714   21811 main.go:141] libmachine: found compatible host: buildroot
	I0610 10:40:31.233723   21811 main.go:141] libmachine: Provisioning with buildroot...
	I0610 10:40:31.233729   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetMachineName
	I0610 10:40:31.234006   21811 buildroot.go:166] provisioning hostname "ha-565925-m03"
	I0610 10:40:31.234030   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetMachineName
	I0610 10:40:31.234209   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:40:31.236834   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.237210   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:31.237247   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.237407   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:40:31.237594   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:31.237872   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:31.238052   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:40:31.238228   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:40:31.238430   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0610 10:40:31.238446   21811 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565925-m03 && echo "ha-565925-m03" | sudo tee /etc/hostname
	I0610 10:40:31.350884   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565925-m03
	
	I0610 10:40:31.350907   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:40:31.353726   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.354160   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:31.354182   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.354412   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:40:31.354603   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:31.354783   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:31.354949   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:40:31.355123   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:40:31.355327   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0610 10:40:31.355350   21811 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565925-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565925-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565925-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 10:40:31.465909   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
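
The hostname step runs the two shell snippets shown above (set the hostname, then patch /etc/hosts) over the native Go SSH client. A compact stand-in using golang.org/x/crypto/ssh, which that client builds on, is sketched below; the host, key path, and the InsecureIgnoreHostKey callback are simplifying assumptions.

	// set_hostname.go - sketch: run the hostname command from the log over an SSH session.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func runRemote(client *ssh.Client, cmd string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		fmt.Printf("$ %s\n%s", cmd, out)
		return err
	}

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // assumption: skip host-key pinning for brevity
		}
		client, err := ssh.Dial("tcp", "192.168.39.76:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		name := "ha-565925-m03"
		if err := runRemote(client, fmt.Sprintf(`sudo hostname %s && echo "%s" | sudo tee /etc/hostname`, name, name)); err != nil {
			panic(err)
		}
	}
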
	I0610 10:40:31.465939   21811 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19046-3880/.minikube CaCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19046-3880/.minikube}
	I0610 10:40:31.465953   21811 buildroot.go:174] setting up certificates
	I0610 10:40:31.465961   21811 provision.go:84] configureAuth start
	I0610 10:40:31.465968   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetMachineName
	I0610 10:40:31.466250   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetIP
	I0610 10:40:31.468714   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.469095   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:31.469120   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.469309   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:40:31.471382   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.471712   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:31.471743   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.471880   21811 provision.go:143] copyHostCerts
	I0610 10:40:31.471909   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 10:40:31.471949   21811 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem, removing ...
	I0610 10:40:31.471961   21811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 10:40:31.472043   21811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem (1082 bytes)
	I0610 10:40:31.472135   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 10:40:31.472160   21811 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem, removing ...
	I0610 10:40:31.472179   21811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 10:40:31.472224   21811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem (1123 bytes)
	I0610 10:40:31.472286   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 10:40:31.472308   21811 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem, removing ...
	I0610 10:40:31.472315   21811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 10:40:31.472354   21811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem (1675 bytes)
	I0610 10:40:31.472424   21811 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem org=jenkins.ha-565925-m03 san=[127.0.0.1 192.168.39.76 ha-565925-m03 localhost minikube]
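
configureAuth regenerates a server certificate whose subject alternative names cover the VM's addresses and hostnames, exactly as the san=[...] list above shows. A bare-bones crypto/x509 sketch of issuing such a certificate from an existing CA follows; the file paths, validity period, serial handling, and the assumption that the CA key is an RSA PKCS#1 PEM are illustrative, and minikube's own cert helper differs in detail.

	// server_cert.go - sketch: issue a server cert carrying the SANs listed in the log, signed by a CA.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func mustPEM(path string) []byte {
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block in " + path)
		}
		return block.Bytes
	}

	func main() {
		caCert, err := x509.ParseCertificate(mustPEM("ca.pem"))
		if err != nil {
			panic(err)
		}
		caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem")) // assumption: RSA PKCS#1 CA key
		if err != nil {
			panic(err)
		}
		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{CommonName: "ha-565925-m03", Organization: []string{"jenkins.ha-565925-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The SANs reported by provision.go above:
			DNSNames:    []string{"ha-565925-m03", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.76")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
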
	I0610 10:40:31.735807   21811 provision.go:177] copyRemoteCerts
	I0610 10:40:31.735855   21811 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 10:40:31.735876   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:40:31.738723   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.739067   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:31.739095   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.739258   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:40:31.739451   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:31.739638   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:40:31.739770   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa Username:docker}
	I0610 10:40:31.822436   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 10:40:31.822499   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 10:40:31.846296   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 10:40:31.846353   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0610 10:40:31.869575   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 10:40:31.869667   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 10:40:31.892496   21811 provision.go:87] duration metric: took 426.521202ms to configureAuth
	I0610 10:40:31.892530   21811 buildroot.go:189] setting minikube options for container-runtime
	I0610 10:40:31.892761   21811 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:40:31.892826   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:40:31.895916   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.896439   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:31.896465   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.896683   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:40:31.896872   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:31.897023   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:31.897159   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:40:31.897295   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:40:31.897443   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0610 10:40:31.897457   21811 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 10:40:32.146262   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 10:40:32.146294   21811 main.go:141] libmachine: Checking connection to Docker...
	I0610 10:40:32.146304   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetURL
	I0610 10:40:32.147674   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Using libvirt version 6000000
	I0610 10:40:32.150109   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.150508   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:32.150538   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.150671   21811 main.go:141] libmachine: Docker is up and running!
	I0610 10:40:32.150689   21811 main.go:141] libmachine: Reticulating splines...
	I0610 10:40:32.150697   21811 client.go:171] duration metric: took 26.915416102s to LocalClient.Create
	I0610 10:40:32.150723   21811 start.go:167] duration metric: took 26.915480978s to libmachine.API.Create "ha-565925"
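A note on the CRIO_MINIKUBE_OPTIONS step above: the %!s(MISSING) is a logging artifact, not part of the command. The command string carries a literal %s and is echoed through a printf-style logger with no argument for it, so fmt marks the verb as MISSING; the output echoed at 10:40:32.146 shows what actually lands in /etc/sysconfig/crio.minikube. Read that way, an equivalent of the step, reflowed onto separate lines, is:

    # Equivalent of the sysconfig write above (the log's %!s(MISSING) stands for a plain %s verb).
    sudo mkdir -p /etc/sysconfig
    printf "%s\n" "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio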
	I0610 10:40:32.150735   21811 start.go:293] postStartSetup for "ha-565925-m03" (driver="kvm2")
	I0610 10:40:32.150746   21811 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 10:40:32.150773   21811 main.go:141] libmachine: (ha-565925-m03) Calling .DriverName
	I0610 10:40:32.151027   21811 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 10:40:32.151058   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:40:32.153169   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.153458   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:32.153478   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.153603   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:40:32.153773   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:32.153971   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:40:32.154128   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa Username:docker}
	I0610 10:40:32.230935   21811 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 10:40:32.234722   21811 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 10:40:32.234745   21811 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/addons for local assets ...
	I0610 10:40:32.234812   21811 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/files for local assets ...
	I0610 10:40:32.234894   21811 filesync.go:149] local asset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> 107582.pem in /etc/ssl/certs
	I0610 10:40:32.234906   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /etc/ssl/certs/107582.pem
	I0610 10:40:32.235015   21811 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 10:40:32.244311   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /etc/ssl/certs/107582.pem (1708 bytes)
	I0610 10:40:32.269943   21811 start.go:296] duration metric: took 119.190727ms for postStartSetup
	I0610 10:40:32.269984   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetConfigRaw
	I0610 10:40:32.270553   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetIP
	I0610 10:40:32.273049   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.273478   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:32.273503   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.273761   21811 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json ...
	I0610 10:40:32.273948   21811 start.go:128] duration metric: took 27.056671199s to createHost
	I0610 10:40:32.273970   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:40:32.275856   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.276263   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:32.276285   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.276443   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:40:32.276614   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:32.276782   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:32.276971   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:40:32.277203   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:40:32.277356   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0610 10:40:32.277369   21811 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 10:40:32.373481   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718016032.353629638
	
	I0610 10:40:32.373505   21811 fix.go:216] guest clock: 1718016032.353629638
	I0610 10:40:32.373513   21811 fix.go:229] Guest: 2024-06-10 10:40:32.353629638 +0000 UTC Remote: 2024-06-10 10:40:32.273959511 +0000 UTC m=+161.060046086 (delta=79.670127ms)
	I0610 10:40:32.373530   21811 fix.go:200] guest clock delta is within tolerance: 79.670127ms
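The fix.go lines above measure guest clock skew: the remote command, reading the %! markers as unsubstituted %s and %N verbs, is just date +%s.%N, and its result is compared with the host clock; here the delta was about 80ms, inside tolerance. A hand-rolled version of the same check, with the SSH key path from the log kept in a variable for readability:

    # Compare guest and host clocks the same way the provisioner does.
    KEY=/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa
    guest=$(ssh -i "$KEY" docker@192.168.39.76 'date +%s.%N')
    host=$(date +%s.%N)
    awk -v h="$host" -v g="$guest" 'BEGIN { printf "clock delta: %+.3f s\n", h - g }'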
	I0610 10:40:32.373537   21811 start.go:83] releasing machines lock for "ha-565925-m03", held for 27.156372466s
	I0610 10:40:32.373560   21811 main.go:141] libmachine: (ha-565925-m03) Calling .DriverName
	I0610 10:40:32.373858   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetIP
	I0610 10:40:32.376677   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.377089   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:32.377120   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.379462   21811 out.go:177] * Found network options:
	I0610 10:40:32.380859   21811 out.go:177]   - NO_PROXY=192.168.39.208,192.168.39.230
	W0610 10:40:32.382020   21811 proxy.go:119] fail to check proxy env: Error ip not in block
	W0610 10:40:32.382052   21811 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 10:40:32.382065   21811 main.go:141] libmachine: (ha-565925-m03) Calling .DriverName
	I0610 10:40:32.382567   21811 main.go:141] libmachine: (ha-565925-m03) Calling .DriverName
	I0610 10:40:32.382781   21811 main.go:141] libmachine: (ha-565925-m03) Calling .DriverName
	I0610 10:40:32.382883   21811 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 10:40:32.382921   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	W0610 10:40:32.382997   21811 proxy.go:119] fail to check proxy env: Error ip not in block
	W0610 10:40:32.383026   21811 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 10:40:32.383079   21811 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 10:40:32.383102   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:40:32.385756   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.386850   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.386886   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:32.387337   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:32.387373   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.387398   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.387555   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:40:32.387648   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:40:32.387726   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:32.387797   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:32.387858   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:40:32.387957   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:40:32.388038   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa Username:docker}
	I0610 10:40:32.388114   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa Username:docker}
	I0610 10:40:32.620387   21811 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 10:40:32.626506   21811 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 10:40:32.626584   21811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 10:40:32.644521   21811 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 10:40:32.644548   21811 start.go:494] detecting cgroup driver to use...
	I0610 10:40:32.644618   21811 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 10:40:32.660410   21811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 10:40:32.673631   21811 docker.go:217] disabling cri-docker service (if available) ...
	I0610 10:40:32.673681   21811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 10:40:32.687825   21811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 10:40:32.702644   21811 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 10:40:32.822310   21811 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 10:40:32.961150   21811 docker.go:233] disabling docker service ...
	I0610 10:40:32.961243   21811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 10:40:32.975285   21811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 10:40:32.987979   21811 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 10:40:33.128167   21811 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 10:40:33.255549   21811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 10:40:33.268974   21811 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 10:40:33.286308   21811 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0610 10:40:33.286375   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:40:33.297044   21811 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 10:40:33.297119   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:40:33.307368   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:40:33.318217   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:40:33.328550   21811 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 10:40:33.339085   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:40:33.349165   21811 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:40:33.365797   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
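The runtime configuration on the new node is plain shell over SSH: crictl is pointed at the CRI-O socket (the printf's %s again shows up as %!s(MISSING) in the log), and a series of sed edits against the same drop-in file pin the pause image, switch the cgroup manager to cgroupfs, set conmon's cgroup to "pod" and open unprivileged ports via default_sysctls. Consolidated into one sketch (CONF is a shell variable introduced here only for brevity):

    # crictl endpoint plus the cri-o drop-in edits from the log, consolidated.
    printf "%s\n" "runtime-endpoint: unix:///var/run/crio/crio.sock" | sudo tee /etc/crictl.yaml
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' $CONF
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' $CONF
    sudo sed -i '/conmon_cgroup = .*/d' $CONF
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' $CONF
    sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' $CONF
    sudo grep -q "^ *default_sysctls" $CONF || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' $CONF
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' $CONF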
	I0610 10:40:33.375766   21811 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 10:40:33.384682   21811 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0610 10:40:33.384739   21811 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0610 10:40:33.398360   21811 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
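The sysctl probe right above exits with status 255 because br_netfilter is not loaded yet on a fresh VM; the code treats that as acceptable, loads the module and turns on IPv4 forwarding so bridged pod traffic is visible to iptables. The same three steps as plain commands:

    # Netfilter prerequisites for the bridge CNI, as run in the log.
    sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo sysctl net.bridge.bridge-nf-call-iptables    # resolves once the module is loaded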
	I0610 10:40:33.407882   21811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:40:33.525781   21811 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0610 10:40:33.675216   21811 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 10:40:33.675278   21811 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 10:40:33.680297   21811 start.go:562] Will wait 60s for crictl version
	I0610 10:40:33.680354   21811 ssh_runner.go:195] Run: which crictl
	I0610 10:40:33.684191   21811 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 10:40:33.724690   21811 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0610 10:40:33.724754   21811 ssh_runner.go:195] Run: crio --version
	I0610 10:40:33.758087   21811 ssh_runner.go:195] Run: crio --version
	I0610 10:40:33.791645   21811 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0610 10:40:33.793142   21811 out.go:177]   - env NO_PROXY=192.168.39.208
	I0610 10:40:33.794452   21811 out.go:177]   - env NO_PROXY=192.168.39.208,192.168.39.230
	I0610 10:40:33.795713   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetIP
	I0610 10:40:33.798904   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:33.799413   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:33.799444   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:33.799634   21811 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0610 10:40:33.803804   21811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
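The host.minikube.internal mapping is applied with a grep-then-rewrite pattern, so repeating the step is harmless: if the name already resolves nothing changes, otherwise any stale line is filtered out and the fresh entry appended. Spelled out as a standalone sketch (run on the guest):

    # Idempotent /etc/hosts entry for host.minikube.internal (address from the log).
    if ! grep -q 'host.minikube.internal' /etc/hosts; then
      { grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.39.1\thost.minikube.internal\n'; } > /tmp/h.$$
      sudo cp /tmp/h.$$ /etc/hosts
    fi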
	I0610 10:40:33.815317   21811 mustload.go:65] Loading cluster: ha-565925
	I0610 10:40:33.815593   21811 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:40:33.815844   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:40:33.815883   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:40:33.830974   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44969
	I0610 10:40:33.831407   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:40:33.831916   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:40:33.831936   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:40:33.832243   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:40:33.832446   21811 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:40:33.834077   21811 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:40:33.834356   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:40:33.834394   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:40:33.849334   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37657
	I0610 10:40:33.849815   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:40:33.850272   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:40:33.850296   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:40:33.850612   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:40:33.850814   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:40:33.850997   21811 certs.go:68] Setting up /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925 for IP: 192.168.39.76
	I0610 10:40:33.851011   21811 certs.go:194] generating shared ca certs ...
	I0610 10:40:33.851029   21811 certs.go:226] acquiring lock for ca certs: {Name:mke8b68fecbd8b649419d142cdde25446085a9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:40:33.851175   21811 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key
	I0610 10:40:33.851237   21811 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key
	I0610 10:40:33.851250   21811 certs.go:256] generating profile certs ...
	I0610 10:40:33.851325   21811 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.key
	I0610 10:40:33.851351   21811 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.512d8c09
	I0610 10:40:33.851364   21811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.512d8c09 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.230 192.168.39.76 192.168.39.254]
	I0610 10:40:33.925414   21811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.512d8c09 ...
	I0610 10:40:33.925443   21811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.512d8c09: {Name:mkae780a0d2dbc4ec4fdafac1ace76b0fd2d0fb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:40:33.925607   21811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.512d8c09 ...
	I0610 10:40:33.925619   21811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.512d8c09: {Name:mk6129f5d875915e5790355da934688584ed0ae2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:40:33.925689   21811 certs.go:381] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.512d8c09 -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt
	I0610 10:40:33.925812   21811 certs.go:385] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.512d8c09 -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key
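The apiserver serving certificate has to be regenerated at this point because its SAN list must now cover the new control-plane IP 192.168.39.76 in addition to the service/cluster IPs, localhost, the existing nodes and the VIP 192.168.39.254 (the full list is in the crypto.go line above). Plain openssl is enough to confirm what ended up in the regenerated cert; the path is taken from the log:

    # List the Subject Alternative Names baked into the profile's apiserver certificate.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt \
      | grep -A1 'Subject Alternative Name'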
	I0610 10:40:33.925940   21811 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key
	I0610 10:40:33.925959   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 10:40:33.925979   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 10:40:33.925995   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 10:40:33.926014   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 10:40:33.926032   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 10:40:33.926050   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 10:40:33.926068   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 10:40:33.926086   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 10:40:33.926144   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem (1338 bytes)
	W0610 10:40:33.926175   21811 certs.go:480] ignoring /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758_empty.pem, impossibly tiny 0 bytes
	I0610 10:40:33.926186   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 10:40:33.926205   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem (1082 bytes)
	I0610 10:40:33.926227   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem (1123 bytes)
	I0610 10:40:33.926249   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem (1675 bytes)
	I0610 10:40:33.926287   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem (1708 bytes)
	I0610 10:40:33.926313   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /usr/share/ca-certificates/107582.pem
	I0610 10:40:33.926326   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:40:33.926338   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem -> /usr/share/ca-certificates/10758.pem
	I0610 10:40:33.926367   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:40:33.929419   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:40:33.929918   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:40:33.929942   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:40:33.930107   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:40:33.930324   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:40:33.930475   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:40:33.930637   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:40:34.005310   21811 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0610 10:40:34.011309   21811 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0610 10:40:34.022850   21811 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0610 10:40:34.026923   21811 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0610 10:40:34.037843   21811 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0610 10:40:34.041779   21811 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0610 10:40:34.052470   21811 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0610 10:40:34.056818   21811 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0610 10:40:34.067304   21811 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0610 10:40:34.072036   21811 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0610 10:40:34.082439   21811 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0610 10:40:34.087027   21811 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0610 10:40:34.099447   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 10:40:34.123075   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 10:40:34.147023   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 10:40:34.170034   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 10:40:34.192193   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0610 10:40:34.213773   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 10:40:34.234759   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 10:40:34.257207   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 10:40:34.279806   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /usr/share/ca-certificates/107582.pem (1708 bytes)
	I0610 10:40:34.303155   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 10:40:34.326009   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem --> /usr/share/ca-certificates/10758.pem (1338 bytes)
	I0610 10:40:34.347846   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0610 10:40:34.363438   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0610 10:40:34.379176   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0610 10:40:34.394884   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0610 10:40:34.411721   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0610 10:40:34.427602   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0610 10:40:34.445919   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0610 10:40:34.462559   21811 ssh_runner.go:195] Run: openssl version
	I0610 10:40:34.469091   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107582.pem && ln -fs /usr/share/ca-certificates/107582.pem /etc/ssl/certs/107582.pem"
	I0610 10:40:34.480339   21811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107582.pem
	I0610 10:40:34.484773   21811 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:34 /usr/share/ca-certificates/107582.pem
	I0610 10:40:34.484835   21811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107582.pem
	I0610 10:40:34.490314   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107582.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 10:40:34.500730   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 10:40:34.511174   21811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:40:34.515178   21811 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:40:34.515237   21811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:40:34.520333   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 10:40:34.530433   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10758.pem && ln -fs /usr/share/ca-certificates/10758.pem /etc/ssl/certs/10758.pem"
	I0610 10:40:34.540090   21811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10758.pem
	I0610 10:40:34.544131   21811 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:34 /usr/share/ca-certificates/10758.pem
	I0610 10:40:34.544191   21811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10758.pem
	I0610 10:40:34.549491   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10758.pem /etc/ssl/certs/51391683.0"
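The openssl/ln sequence above is how the guest's trust store is filled: each PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the jenkins certs), which is the naming scheme OpenSSL uses to look up CAs. For a single certificate the pattern is:

    # Install one CA into the hash-named trust directory, as the log does per certificate.
    PEM=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$PEM")
    sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"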
	I0610 10:40:34.558986   21811 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 10:40:34.562931   21811 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 10:40:34.562987   21811 kubeadm.go:928] updating node {m03 192.168.39.76 8443 v1.30.1 crio true true} ...
	I0610 10:40:34.563068   21811 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565925-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 10:40:34.563092   21811 kube-vip.go:115] generating kube-vip config ...
	I0610 10:40:34.563122   21811 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0610 10:40:34.577712   21811 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0610 10:40:34.577772   21811 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
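The kube-vip pod above is generated as a static pod manifest; a few lines further down it is copied to /etc/kubernetes/manifests/kube-vip.yaml, so the kubelet runs it without involving the API server, and the address it advertises (192.168.39.254) is the APIServerHAVIP from the cluster config. To sanity-check such a manifest before a join, a client-side dry run is enough; kube-vip.yaml here is just a hypothetical local copy of the generated YAML:

    # Validate a saved copy of the generated manifest without touching the cluster.
    kubectl apply --dry-run=client -f kube-vip.yaml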
	I0610 10:40:34.577841   21811 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 10:40:34.586773   21811 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0610 10:40:34.586835   21811 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0610 10:40:34.596214   21811 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0610 10:40:34.596233   21811 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0610 10:40:34.596242   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0610 10:40:34.596255   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0610 10:40:34.596274   21811 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0610 10:40:34.596309   21811 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0610 10:40:34.596332   21811 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0610 10:40:34.596311   21811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:40:34.605576   21811 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0610 10:40:34.605612   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0610 10:40:34.605907   21811 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0610 10:40:34.605944   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0610 10:40:34.627908   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0610 10:40:34.628008   21811 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0610 10:40:34.730261   21811 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0610 10:40:34.730305   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
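Because /var/lib/minikube/binaries/v1.30.1 does not exist on the fresh node, kubeadm, kubectl and kubelet are pushed over from the host's cache, which itself is populated from dl.k8s.io with a .sha256 checksum file per binary (the URLs appear a few lines up). Fetching and verifying one of them by hand follows the usual pattern:

    # Download kubelet v1.30.1 and check it against the published sha256 (URLs from the log).
    curl -LO https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet
    curl -LO https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check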
	I0610 10:40:35.509956   21811 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0610 10:40:35.520028   21811 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0610 10:40:35.536892   21811 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 10:40:35.554633   21811 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0610 10:40:35.571335   21811 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0610 10:40:35.575481   21811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 10:40:35.588334   21811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:40:35.712100   21811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 10:40:35.729688   21811 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:40:35.730051   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:40:35.730103   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:40:35.745864   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38503
	I0610 10:40:35.746283   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:40:35.746807   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:40:35.746830   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:40:35.747214   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:40:35.747413   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:40:35.747529   21811 start.go:316] joinCluster: &{Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:40:35.747683   21811 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0610 10:40:35.747702   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:40:35.750997   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:40:35.751410   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:40:35.751430   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:40:35.751614   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:40:35.751776   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:40:35.751933   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:40:35.752055   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:40:36.030423   21811 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 10:40:36.030481   21811 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token h1yzks.ltnn52dog1u09foz --discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565925-m03 --control-plane --apiserver-advertise-address=192.168.39.76 --apiserver-bind-port=8443"
	I0610 10:40:59.310507   21811 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token h1yzks.ltnn52dog1u09foz --discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565925-m03 --control-plane --apiserver-advertise-address=192.168.39.76 --apiserver-bind-port=8443": (23.279996408s)
	I0610 10:40:59.310545   21811 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0610 10:40:59.862689   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565925-m03 minikube.k8s.io/updated_at=2024_06_10T10_40_59_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=ha-565925 minikube.k8s.io/primary=false
	I0610 10:40:59.991741   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-565925-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0610 10:41:00.102879   21811 start.go:318] duration metric: took 24.355343976s to joinCluster
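Underneath the wrappers this is the stock kubeadm HA join: the primary prints a join command with a fresh token, the new machine runs it with the extra control-plane flags, and the node is then labeled and has its control-plane NoSchedule taint removed so it can also run workloads. Reduced to the bare commands, with the token and CA hash left as placeholders:

    # On an existing control-plane node: print a join command with a fresh token.
    kubeadm token create --print-join-command --ttl=0
    # On the new machine, run what it prints plus the flags seen in the log, e.g.
    #   kubeadm join control-plane.minikube.internal:8443 --token <token> \
    #     --discovery-token-ca-cert-hash sha256:<hash> \
    #     --control-plane --apiserver-advertise-address=192.168.39.76 --apiserver-bind-port=8443
    # Then, with an admin kubeconfig:
    kubectl label --overwrite nodes ha-565925-m03 minikube.k8s.io/primary=false
    kubectl taint nodes ha-565925-m03 node-role.kubernetes.io/control-plane:NoSchedule-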
	I0610 10:41:00.102952   21811 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 10:41:00.104331   21811 out.go:177] * Verifying Kubernetes components...
	I0610 10:41:00.103248   21811 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:41:00.105592   21811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:41:00.415091   21811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 10:41:00.451391   21811 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 10:41:00.451658   21811 kapi.go:59] client config for ha-565925: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.crt", KeyFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.key", CAFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfaf80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0610 10:41:00.451721   21811 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.208:8443
	I0610 10:41:00.451896   21811 node_ready.go:35] waiting up to 6m0s for node "ha-565925-m03" to be "Ready" ...
	I0610 10:41:00.451955   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:00.451963   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:00.451970   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:00.451973   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:00.457416   21811 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 10:41:00.952872   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:00.952895   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:00.952905   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:00.952914   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:00.956651   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:01.452202   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:01.452234   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:01.452244   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:01.452249   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:01.455691   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:01.952818   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:01.952853   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:01.952875   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:01.952879   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:01.956530   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:02.452074   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:02.452096   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:02.452110   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:02.452115   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:02.455726   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:02.456257   21811 node_ready.go:53] node "ha-565925-m03" has status "Ready":"False"
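From here the client polls GET /api/v1/nodes/ha-565925-m03 roughly every half second, after overriding the kubeconfig's stale VIP endpoint with the primary's 192.168.39.208 as the warning above notes, until the node's Ready condition turns True or the 6m0s budget runs out. The same wait expressed with kubectl, assuming the current kubeconfig/context points at this cluster:

    # Watch the node until its Ready condition flips (the kubectl equivalent of the polling loop).
    kubectl get node ha-565925-m03 -w
    # Or request the exact path the log is hitting:
    kubectl get --raw /api/v1/nodes/ha-565925-m03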
	I0610 10:41:02.952809   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:02.952880   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:02.952891   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:02.952896   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:02.956516   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:03.452358   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:03.452380   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:03.452388   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:03.452393   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:03.456184   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:03.952483   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:03.952504   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:03.952513   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:03.952519   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:03.956051   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:04.452262   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:04.452284   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:04.452291   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:04.452296   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:04.455788   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:04.952016   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:04.952068   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:04.952079   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:04.952091   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:04.955611   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:04.956246   21811 node_ready.go:53] node "ha-565925-m03" has status "Ready":"False"
	I0610 10:41:05.452534   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:05.452557   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:05.452565   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:05.452568   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:05.456632   21811 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:41:05.952150   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:05.952171   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:05.952179   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:05.952183   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:05.955673   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:06.452594   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:06.452618   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:06.452626   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:06.452630   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:06.455526   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:06.952469   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:06.952493   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:06.952504   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:06.952510   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:06.955666   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:06.956481   21811 node_ready.go:53] node "ha-565925-m03" has status "Ready":"False"
	I0610 10:41:07.452930   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:07.452996   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:07.453007   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:07.453013   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:07.455849   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:07.456349   21811 node_ready.go:49] node "ha-565925-m03" has status "Ready":"True"
	I0610 10:41:07.456366   21811 node_ready.go:38] duration metric: took 7.004457662s for node "ha-565925-m03" to be "Ready" ...
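The polling loop above is the node_ready check: roughly every 500 ms the Node object is re-fetched until its Ready condition reports "True". For illustration only, here is a minimal client-go sketch of the same condition test; it is not minikube's code, and the kubeconfig path is a placeholder (the node name is taken from this log).

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the named node currently has condition Ready=True.
func nodeIsReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Hypothetical kubeconfig path; adjust for the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Re-check about every 500 ms, mirroring the cadence in the log above.
	for {
		ready, err := nodeIsReady(context.TODO(), client, "ha-565925-m03")
		if err == nil && ready {
			fmt.Println("node ha-565925-m03 is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}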
	I0610 10:41:07.456374   21811 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 10:41:07.456426   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I0610 10:41:07.456435   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:07.456443   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:07.456448   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:07.463172   21811 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 10:41:07.470000   21811 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:07.470075   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 10:41:07.470083   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:07.470090   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:07.470096   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:07.473333   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:07.474159   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:41:07.474176   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:07.474186   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:07.474191   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:07.476705   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:07.477267   21811 pod_ready.go:92] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:07.477285   21811 pod_ready.go:81] duration metric: took 7.259942ms for pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:07.477295   21811 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wn6nh" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:07.477354   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wn6nh
	I0610 10:41:07.477364   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:07.477373   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:07.477378   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:07.482359   21811 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:41:07.482941   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:41:07.482953   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:07.482960   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:07.482964   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:07.485814   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:07.486307   21811 pod_ready.go:92] pod "coredns-7db6d8ff4d-wn6nh" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:07.486324   21811 pod_ready.go:81] duration metric: took 9.021797ms for pod "coredns-7db6d8ff4d-wn6nh" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:07.486339   21811 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:07.486403   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925
	I0610 10:41:07.486413   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:07.486422   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:07.486429   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:07.489824   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:07.490287   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:41:07.490305   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:07.490315   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:07.490320   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:07.492347   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:07.492861   21811 pod_ready.go:92] pod "etcd-ha-565925" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:07.492877   21811 pod_ready.go:81] duration metric: took 6.531211ms for pod "etcd-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:07.492888   21811 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:07.492989   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m02
	I0610 10:41:07.493003   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:07.493013   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:07.493023   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:07.495308   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:07.495958   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:41:07.495998   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:07.496026   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:07.496036   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:07.498709   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:07.499086   21811 pod_ready.go:92] pod "etcd-ha-565925-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:07.499098   21811 pod_ready.go:81] duration metric: took 6.204218ms for pod "etcd-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:07.499106   21811 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-565925-m03" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:07.653468   21811 request.go:629] Waited for 154.307525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:07.653523   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:07.653529   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:07.653560   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:07.653569   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:07.657367   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:07.853472   21811 request.go:629] Waited for 195.469114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:07.853535   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:07.853542   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:07.853553   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:07.853562   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:07.856466   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:08.053541   21811 request.go:629] Waited for 54.246552ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:08.053604   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:08.053610   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:08.053620   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:08.053637   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:08.057135   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:08.253024   21811 request.go:629] Waited for 195.35667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:08.253099   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:08.253108   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:08.253126   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:08.253133   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:08.259607   21811 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 10:41:08.499397   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:08.499428   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:08.499436   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:08.499439   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:08.502919   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:08.653923   21811 request.go:629] Waited for 150.309174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:08.653992   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:08.653998   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:08.654005   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:08.654009   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:08.657112   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:08.999902   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:08.999932   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:08.999940   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:08.999944   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:09.002885   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:09.053672   21811 request.go:629] Waited for 50.2193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:09.053737   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:09.053745   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:09.053759   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:09.053766   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:09.056943   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:09.500130   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:09.500147   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:09.500155   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:09.500160   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:09.503204   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:09.503851   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:09.503866   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:09.503874   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:09.503878   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:09.506739   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:09.507205   21811 pod_ready.go:102] pod "etcd-ha-565925-m03" in "kube-system" namespace has status "Ready":"False"
	I0610 10:41:09.999578   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:09.999600   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:09.999610   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:09.999617   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:10.003109   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:10.003722   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:10.003738   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:10.003745   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:10.003749   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:10.006533   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:10.499652   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:10.499671   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:10.499681   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:10.499688   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:10.503181   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:10.503914   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:10.503929   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:10.503958   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:10.503968   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:10.506488   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:10.999849   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:10.999873   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:10.999884   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:10.999889   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:11.003058   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:11.003757   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:11.003774   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:11.003781   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:11.003784   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:11.006504   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:11.499519   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:11.499541   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:11.499553   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:11.499558   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:11.502468   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:11.503108   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:11.503123   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:11.503133   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:11.503136   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:11.505496   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:11.999588   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:11.999610   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:11.999618   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:11.999622   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:12.002908   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:12.003613   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:12.003627   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:12.003634   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:12.003638   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:12.006896   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:12.007399   21811 pod_ready.go:102] pod "etcd-ha-565925-m03" in "kube-system" namespace has status "Ready":"False"
	I0610 10:41:12.499681   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:12.499702   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:12.499709   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:12.499714   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:12.502539   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:12.503306   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:12.503325   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:12.503335   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:12.503341   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:12.505852   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:13.000227   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:13.000277   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:13.000289   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:13.000296   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:13.003436   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:13.004420   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:13.004438   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:13.004456   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:13.004465   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:13.007317   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:13.500393   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:13.500420   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:13.500430   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:13.500436   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:13.504782   21811 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:41:13.505356   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:13.505371   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:13.505379   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:13.505385   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:13.508006   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:13.999714   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:13.999732   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:13.999741   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:13.999746   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:14.003320   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:14.004114   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:14.004131   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:14.004141   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:14.004145   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:14.007268   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:14.007850   21811 pod_ready.go:102] pod "etcd-ha-565925-m03" in "kube-system" namespace has status "Ready":"False"
	I0610 10:41:14.500166   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:14.500185   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:14.500192   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:14.500194   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:14.506559   21811 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 10:41:14.507357   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:14.507375   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:14.507385   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:14.507390   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:14.509902   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:14.999744   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:14.999771   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:14.999781   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:14.999786   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:15.004161   21811 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:41:15.004894   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:15.004910   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:15.004928   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:15.004932   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:15.007751   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:15.008310   21811 pod_ready.go:92] pod "etcd-ha-565925-m03" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:15.008330   21811 pod_ready.go:81] duration metric: took 7.509218371s for pod "etcd-ha-565925-m03" in "kube-system" namespace to be "Ready" ...
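The wait for etcd-ha-565925-m03 above applies the same pattern to a Pod: pod_ready keeps re-reading the Pod (and its node) until the Pod's Ready condition turns "True". A minimal sketch of that condition test, reusing the imports and clientset from the node example above (again an illustration, not minikube's implementation):

// podIsReady reports whether the pod in the given namespace has condition Ready=True.
func podIsReady(ctx context.Context, c kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}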
	I0610 10:41:15.008346   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:15.008408   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925
	I0610 10:41:15.008415   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:15.008422   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:15.008429   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:15.011990   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:15.012993   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:41:15.013046   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:15.013060   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:15.013066   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:15.020219   21811 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 10:41:15.020787   21811 pod_ready.go:92] pod "kube-apiserver-ha-565925" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:15.020808   21811 pod_ready.go:81] duration metric: took 12.4522ms for pod "kube-apiserver-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:15.020821   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:15.020886   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925-m02
	I0610 10:41:15.020896   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:15.020906   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:15.020914   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:15.023901   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:15.024541   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:41:15.024558   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:15.024568   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:15.024577   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:15.027137   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:15.027507   21811 pod_ready.go:92] pod "kube-apiserver-ha-565925-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:15.027524   21811 pod_ready.go:81] duration metric: took 6.696061ms for pod "kube-apiserver-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:15.027536   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-565925-m03" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:15.027605   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925-m03
	I0610 10:41:15.027618   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:15.027628   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:15.027633   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:15.030410   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:15.053192   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:15.053217   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:15.053226   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:15.053230   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:15.056115   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:15.528482   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925-m03
	I0610 10:41:15.528501   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:15.528509   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:15.528513   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:15.532201   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:15.532997   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:15.533012   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:15.533019   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:15.533023   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:15.535607   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:16.027885   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925-m03
	I0610 10:41:16.027909   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:16.027917   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:16.027923   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:16.031124   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:16.031926   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:16.031991   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:16.032006   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:16.032012   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:16.034578   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:16.035129   21811 pod_ready.go:92] pod "kube-apiserver-ha-565925-m03" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:16.035149   21811 pod_ready.go:81] duration metric: took 1.007600126s for pod "kube-apiserver-ha-565925-m03" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:16.035158   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:16.053585   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565925
	I0610 10:41:16.053608   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:16.053616   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:16.053620   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:16.057108   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:16.253181   21811 request.go:629] Waited for 195.000831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:41:16.253739   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:41:16.253746   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:16.253755   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:16.253759   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:16.256995   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:16.257598   21811 pod_ready.go:92] pod "kube-controller-manager-ha-565925" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:16.257615   21811 pod_ready.go:81] duration metric: took 222.449236ms for pod "kube-controller-manager-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:16.257625   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:16.453184   21811 request.go:629] Waited for 195.504869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565925-m02
	I0610 10:41:16.453245   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565925-m02
	I0610 10:41:16.453257   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:16.453277   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:16.453284   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:16.456483   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:16.653657   21811 request.go:629] Waited for 196.360099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:41:16.653706   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:41:16.653711   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:16.653717   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:16.653721   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:16.656972   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:16.657733   21811 pod_ready.go:92] pod "kube-controller-manager-ha-565925-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:16.657756   21811 pod_ready.go:81] duration metric: took 400.123605ms for pod "kube-controller-manager-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:16.657769   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-565925-m03" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:16.853705   21811 request.go:629] Waited for 195.851399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565925-m03
	I0610 10:41:16.853763   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565925-m03
	I0610 10:41:16.853768   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:16.853774   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:16.853780   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:16.857401   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:17.053481   21811 request.go:629] Waited for 195.377671ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:17.053543   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:17.053548   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:17.053554   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:17.053558   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:17.056457   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:17.056869   21811 pod_ready.go:92] pod "kube-controller-manager-ha-565925-m03" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:17.056887   21811 pod_ready.go:81] duration metric: took 399.110601ms for pod "kube-controller-manager-ha-565925-m03" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:17.056897   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d44ft" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:17.253959   21811 request.go:629] Waited for 197.000123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d44ft
	I0610 10:41:17.254034   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d44ft
	I0610 10:41:17.254039   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:17.254046   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:17.254052   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:17.259452   21811 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 10:41:17.453381   21811 request.go:629] Waited for 193.283661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:17.453443   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:17.453457   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:17.453467   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:17.453478   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:17.456665   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:17.457111   21811 pod_ready.go:92] pod "kube-proxy-d44ft" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:17.457130   21811 pod_ready.go:81] duration metric: took 400.226885ms for pod "kube-proxy-d44ft" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:17.457143   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vbgnx" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:17.653265   21811 request.go:629] Waited for 196.03805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbgnx
	I0610 10:41:17.653330   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbgnx
	I0610 10:41:17.653338   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:17.653352   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:17.653360   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:17.657669   21811 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:41:17.853857   21811 request.go:629] Waited for 195.217398ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:41:17.853945   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:41:17.853956   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:17.853967   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:17.853973   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:17.857603   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:17.858165   21811 pod_ready.go:92] pod "kube-proxy-vbgnx" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:17.858196   21811 pod_ready.go:81] duration metric: took 401.034656ms for pod "kube-proxy-vbgnx" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:17.858210   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wdjhn" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:18.053438   21811 request.go:629] Waited for 195.16165ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wdjhn
	I0610 10:41:18.053510   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wdjhn
	I0610 10:41:18.053515   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:18.053522   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:18.053528   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:18.061200   21811 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 10:41:18.253302   21811 request.go:629] Waited for 191.397214ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:41:18.253360   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:41:18.253365   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:18.253372   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:18.253375   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:18.256843   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:18.257608   21811 pod_ready.go:92] pod "kube-proxy-wdjhn" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:18.257631   21811 pod_ready.go:81] duration metric: took 399.412602ms for pod "kube-proxy-wdjhn" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:18.257645   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:18.453730   21811 request.go:629] Waited for 196.017576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565925
	I0610 10:41:18.453827   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565925
	I0610 10:41:18.453838   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:18.453849   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:18.453858   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:18.456757   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:18.653818   21811 request.go:629] Waited for 196.381655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:41:18.653871   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:41:18.653876   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:18.653883   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:18.653887   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:18.657171   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:18.657641   21811 pod_ready.go:92] pod "kube-scheduler-ha-565925" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:18.657659   21811 pod_ready.go:81] duration metric: took 400.006901ms for pod "kube-scheduler-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:18.657668   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:18.853312   21811 request.go:629] Waited for 195.566307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565925-m02
	I0610 10:41:18.853373   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565925-m02
	I0610 10:41:18.853379   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:18.853386   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:18.853390   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:18.856573   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:19.053424   21811 request.go:629] Waited for 196.332307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:41:19.053489   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:41:19.053494   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:19.053501   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:19.053505   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:19.056878   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:19.057621   21811 pod_ready.go:92] pod "kube-scheduler-ha-565925-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:19.057644   21811 pod_ready.go:81] duration metric: took 399.969423ms for pod "kube-scheduler-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:19.057657   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-565925-m03" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:19.253647   21811 request.go:629] Waited for 195.908915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565925-m03
	I0610 10:41:19.253728   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565925-m03
	I0610 10:41:19.253741   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:19.253751   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:19.253760   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:19.257377   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:19.453389   21811 request.go:629] Waited for 195.357232ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:19.453455   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:19.453462   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:19.453472   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:19.453477   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:19.456783   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:19.457428   21811 pod_ready.go:92] pod "kube-scheduler-ha-565925-m03" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:19.457447   21811 pod_ready.go:81] duration metric: took 399.782461ms for pod "kube-scheduler-ha-565925-m03" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:19.457458   21811 pod_ready.go:38] duration metric: took 12.001075789s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
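The recurring "Waited for ... due to client-side throttling, not priority and fairness" messages are emitted by client-go's client-side rate limiter when requests queue up faster than the client's QPS/Burst budget allows; they account for the sub-second pauses between some of the GETs above. A rough sketch of where those knobs live (the values are arbitrary examples, not minikube's settings; assumes "k8s.io/client-go/rest" plus the kubernetes package used in the earlier sketch):

// newLessThrottledClient raises the client-side rate limits so fewer
// "client-side throttling" waits are logged. Example values only.
func newLessThrottledClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
	cfg.QPS = 50    // steady-state requests per second
	cfg.Burst = 100 // short-term burst allowance
	return kubernetes.NewForConfig(cfg)
}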
	I0610 10:41:19.457474   21811 api_server.go:52] waiting for apiserver process to appear ...
	I0610 10:41:19.457524   21811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:41:19.472808   21811 api_server.go:72] duration metric: took 19.36982533s to wait for apiserver process to appear ...
	I0610 10:41:19.472837   21811 api_server.go:88] waiting for apiserver healthz status ...
	I0610 10:41:19.472856   21811 api_server.go:253] Checking apiserver healthz at https://192.168.39.208:8443/healthz ...
	I0610 10:41:19.478589   21811 api_server.go:279] https://192.168.39.208:8443/healthz returned 200:
	ok
	I0610 10:41:19.478658   21811 round_trippers.go:463] GET https://192.168.39.208:8443/version
	I0610 10:41:19.478666   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:19.478676   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:19.478686   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:19.479654   21811 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 10:41:19.479739   21811 api_server.go:141] control plane version: v1.30.1
	I0610 10:41:19.479752   21811 api_server.go:131] duration metric: took 6.910869ms to wait for apiserver health ...
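Before listing the kube-system pods, the log shows the apiserver health gate: a GET to /healthz that must return 200 with body "ok", followed by /version to read the control-plane version (v1.30.1 here). A minimal sketch of the same probe through a client-go clientset, assuming only the endpoint path shown in the log:

// apiServerHealthy probes /healthz via the clientset's REST client and
// treats the literal body "ok" as healthy, as the log above does.
func apiServerHealthy(ctx context.Context, c kubernetes.Interface) (bool, error) {
	body, err := c.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return false, err
	}
	return string(body) == "ok", nil
}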
	I0610 10:41:19.479759   21811 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 10:41:19.653486   21811 request.go:629] Waited for 173.661312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I0610 10:41:19.653542   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I0610 10:41:19.653547   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:19.653559   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:19.653563   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:19.660708   21811 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 10:41:19.667056   21811 system_pods.go:59] 24 kube-system pods found
	I0610 10:41:19.667101   21811 system_pods.go:61] "coredns-7db6d8ff4d-545cf" [7564efde-b96c-48b3-b194-bca695f7ae95] Running
	I0610 10:41:19.667109   21811 system_pods.go:61] "coredns-7db6d8ff4d-wn6nh" [9e47f047-e98b-48c8-8a33-8f790a3e8017] Running
	I0610 10:41:19.667115   21811 system_pods.go:61] "etcd-ha-565925" [527cd8fc-9ac8-4432-a265-910957e9268f] Running
	I0610 10:41:19.667121   21811 system_pods.go:61] "etcd-ha-565925-m02" [7068fe45-72fe-4204-8742-d8803e585954] Running
	I0610 10:41:19.667128   21811 system_pods.go:61] "etcd-ha-565925-m03" [91c6bcb4-59b4-4a31-a5e4-f32d9491b566] Running
	I0610 10:41:19.667133   21811 system_pods.go:61] "kindnet-9jv7q" [2f97ff84-bae1-4e63-9e9a-08e9e7afe68b] Running
	I0610 10:41:19.667139   21811 system_pods.go:61] "kindnet-9tcng" [c47fe372-aee9-4fb2-9c62-b84341af1c81] Running
	I0610 10:41:19.667144   21811 system_pods.go:61] "kindnet-rnn59" [9141e131-eebc-4f51-8b55-46ff649ffaee] Running
	I0610 10:41:19.667151   21811 system_pods.go:61] "kube-apiserver-ha-565925" [75b7b060-85f2-45ca-a58e-a42a8c2d7fab] Running
	I0610 10:41:19.667164   21811 system_pods.go:61] "kube-apiserver-ha-565925-m02" [a7e4eed5-4ada-4063-a8e1-f82ed820f684] Running
	I0610 10:41:19.667171   21811 system_pods.go:61] "kube-apiserver-ha-565925-m03" [225e7590-3610-4bce-9224-88a67f0f7226] Running
	I0610 10:41:19.667181   21811 system_pods.go:61] "kube-controller-manager-ha-565925" [cd41ddc9-22af-4789-a9ea-3663a5de415b] Running
	I0610 10:41:19.667190   21811 system_pods.go:61] "kube-controller-manager-ha-565925-m02" [6b2d5860-4e09-4eeb-a9e3-24952ec3fab4] Running
	I0610 10:41:19.667200   21811 system_pods.go:61] "kube-controller-manager-ha-565925-m03" [2f1dc404-5a14-4ced-ba6d-746e6cd75e57] Running
	I0610 10:41:19.667206   21811 system_pods.go:61] "kube-proxy-d44ft" [7a77472b-d577-4781-bc02-70dbe0c31659] Running
	I0610 10:41:19.667215   21811 system_pods.go:61] "kube-proxy-vbgnx" [f43735ae-adc0-4fe4-992e-b640b52886d7] Running
	I0610 10:41:19.667222   21811 system_pods.go:61] "kube-proxy-wdjhn" [da3ac11b-0906-4695-80b1-f3f4f1a34de1] Running
	I0610 10:41:19.667228   21811 system_pods.go:61] "kube-scheduler-ha-565925" [74663e0a-7f9e-4211-b165-39358cb3b3e2] Running
	I0610 10:41:19.667235   21811 system_pods.go:61] "kube-scheduler-ha-565925-m02" [745d6073-f0af-4aa5-9345-38c9b88dad69] Running
	I0610 10:41:19.667244   21811 system_pods.go:61] "kube-scheduler-ha-565925-m03" [844a6fd4-2d91-47fb-b692-c899c7461a32] Running
	I0610 10:41:19.667251   21811 system_pods.go:61] "kube-vip-ha-565925" [039ffa3e-aac6-4bdc-a576-0158c7fb283d] Running
	I0610 10:41:19.667260   21811 system_pods.go:61] "kube-vip-ha-565925-m02" [f28be16a-38b2-4746-8b18-ab0014783aad] Running
	I0610 10:41:19.667269   21811 system_pods.go:61] "kube-vip-ha-565925-m03" [de1604b6-d98b-4be7-a72e-5500cc89e497] Running
	I0610 10:41:19.667274   21811 system_pods.go:61] "storage-provisioner" [0ca60a36-c445-4520-b857-7df39dfed848] Running
	I0610 10:41:19.667283   21811 system_pods.go:74] duration metric: took 187.51707ms to wait for pod list to return data ...
	I0610 10:41:19.667297   21811 default_sa.go:34] waiting for default service account to be created ...
	I0610 10:41:19.853710   21811 request.go:629] Waited for 186.338285ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/default/serviceaccounts
	I0610 10:41:19.853781   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/default/serviceaccounts
	I0610 10:41:19.853789   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:19.853798   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:19.853804   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:19.857692   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:19.857827   21811 default_sa.go:45] found service account: "default"
	I0610 10:41:19.857845   21811 default_sa.go:55] duration metric: took 190.537888ms for default service account to be created ...
	I0610 10:41:19.857853   21811 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 10:41:20.054008   21811 request.go:629] Waited for 196.075313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I0610 10:41:20.054068   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I0610 10:41:20.054073   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:20.054080   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:20.054086   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:20.061196   21811 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 10:41:20.067104   21811 system_pods.go:86] 24 kube-system pods found
	I0610 10:41:20.067130   21811 system_pods.go:89] "coredns-7db6d8ff4d-545cf" [7564efde-b96c-48b3-b194-bca695f7ae95] Running
	I0610 10:41:20.067136   21811 system_pods.go:89] "coredns-7db6d8ff4d-wn6nh" [9e47f047-e98b-48c8-8a33-8f790a3e8017] Running
	I0610 10:41:20.067140   21811 system_pods.go:89] "etcd-ha-565925" [527cd8fc-9ac8-4432-a265-910957e9268f] Running
	I0610 10:41:20.067144   21811 system_pods.go:89] "etcd-ha-565925-m02" [7068fe45-72fe-4204-8742-d8803e585954] Running
	I0610 10:41:20.067148   21811 system_pods.go:89] "etcd-ha-565925-m03" [91c6bcb4-59b4-4a31-a5e4-f32d9491b566] Running
	I0610 10:41:20.067153   21811 system_pods.go:89] "kindnet-9jv7q" [2f97ff84-bae1-4e63-9e9a-08e9e7afe68b] Running
	I0610 10:41:20.067157   21811 system_pods.go:89] "kindnet-9tcng" [c47fe372-aee9-4fb2-9c62-b84341af1c81] Running
	I0610 10:41:20.067161   21811 system_pods.go:89] "kindnet-rnn59" [9141e131-eebc-4f51-8b55-46ff649ffaee] Running
	I0610 10:41:20.067166   21811 system_pods.go:89] "kube-apiserver-ha-565925" [75b7b060-85f2-45ca-a58e-a42a8c2d7fab] Running
	I0610 10:41:20.067174   21811 system_pods.go:89] "kube-apiserver-ha-565925-m02" [a7e4eed5-4ada-4063-a8e1-f82ed820f684] Running
	I0610 10:41:20.067178   21811 system_pods.go:89] "kube-apiserver-ha-565925-m03" [225e7590-3610-4bce-9224-88a67f0f7226] Running
	I0610 10:41:20.067185   21811 system_pods.go:89] "kube-controller-manager-ha-565925" [cd41ddc9-22af-4789-a9ea-3663a5de415b] Running
	I0610 10:41:20.067190   21811 system_pods.go:89] "kube-controller-manager-ha-565925-m02" [6b2d5860-4e09-4eeb-a9e3-24952ec3fab4] Running
	I0610 10:41:20.067198   21811 system_pods.go:89] "kube-controller-manager-ha-565925-m03" [2f1dc404-5a14-4ced-ba6d-746e6cd75e57] Running
	I0610 10:41:20.067202   21811 system_pods.go:89] "kube-proxy-d44ft" [7a77472b-d577-4781-bc02-70dbe0c31659] Running
	I0610 10:41:20.067209   21811 system_pods.go:89] "kube-proxy-vbgnx" [f43735ae-adc0-4fe4-992e-b640b52886d7] Running
	I0610 10:41:20.067213   21811 system_pods.go:89] "kube-proxy-wdjhn" [da3ac11b-0906-4695-80b1-f3f4f1a34de1] Running
	I0610 10:41:20.067220   21811 system_pods.go:89] "kube-scheduler-ha-565925" [74663e0a-7f9e-4211-b165-39358cb3b3e2] Running
	I0610 10:41:20.067223   21811 system_pods.go:89] "kube-scheduler-ha-565925-m02" [745d6073-f0af-4aa5-9345-38c9b88dad69] Running
	I0610 10:41:20.067230   21811 system_pods.go:89] "kube-scheduler-ha-565925-m03" [844a6fd4-2d91-47fb-b692-c899c7461a32] Running
	I0610 10:41:20.067233   21811 system_pods.go:89] "kube-vip-ha-565925" [039ffa3e-aac6-4bdc-a576-0158c7fb283d] Running
	I0610 10:41:20.067239   21811 system_pods.go:89] "kube-vip-ha-565925-m02" [f28be16a-38b2-4746-8b18-ab0014783aad] Running
	I0610 10:41:20.067243   21811 system_pods.go:89] "kube-vip-ha-565925-m03" [de1604b6-d98b-4be7-a72e-5500cc89e497] Running
	I0610 10:41:20.067249   21811 system_pods.go:89] "storage-provisioner" [0ca60a36-c445-4520-b857-7df39dfed848] Running
	I0610 10:41:20.067254   21811 system_pods.go:126] duration metric: took 209.396723ms to wait for k8s-apps to be running ...
	I0610 10:41:20.067264   21811 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 10:41:20.067300   21811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:41:20.082850   21811 system_svc.go:56] duration metric: took 15.577071ms WaitForService to wait for kubelet
	I0610 10:41:20.082882   21811 kubeadm.go:576] duration metric: took 19.979901985s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:41:20.082908   21811 node_conditions.go:102] verifying NodePressure condition ...
	I0610 10:41:20.253501   21811 request.go:629] Waited for 170.515902ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes
	I0610 10:41:20.253562   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes
	I0610 10:41:20.253570   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:20.253582   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:20.253591   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:20.257357   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:20.258463   21811 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 10:41:20.258490   21811 node_conditions.go:123] node cpu capacity is 2
	I0610 10:41:20.258505   21811 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 10:41:20.258510   21811 node_conditions.go:123] node cpu capacity is 2
	I0610 10:41:20.258515   21811 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 10:41:20.258518   21811 node_conditions.go:123] node cpu capacity is 2
	I0610 10:41:20.258523   21811 node_conditions.go:105] duration metric: took 175.609245ms to run NodePressure ...
	I0610 10:41:20.258536   21811 start.go:240] waiting for startup goroutines ...
	I0610 10:41:20.258563   21811 start.go:254] writing updated cluster config ...
	I0610 10:41:20.258930   21811 ssh_runner.go:195] Run: rm -f paused
	I0610 10:41:20.312399   21811 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 10:41:20.314679   21811 out.go:177] * Done! kubectl is now configured to use "ha-565925" cluster and "default" namespace by default
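	The api_server.go lines above record the readiness loop: minikube polls the control-plane endpoint's /healthz until it returns HTTP 200 with body "ok", then moves on to the pod, service-account, kubelet and NodePressure checks. The following is a minimal, self-contained sketch of that health poll only, not minikube's actual implementation; the endpoint address is taken from the log, while the 2s poll interval, 30s deadline and the InsecureSkipVerify transport are assumptions made so the example runs without the cluster CA.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns 200 "ok" or the timeout expires,
	// mirroring the "Checking apiserver healthz ... returned 200: ok" lines above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The real client trusts the cluster CA; skipping verification
				// here only keeps this sketch self-contained.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		// Address taken from the log above (ha-565925 control plane).
		if err := waitForHealthz("https://192.168.39.208:8443/healthz", 30*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver reports healthy")
	}

	The same check can be reproduced against a running cluster with the cluster's own credentials via "kubectl get --raw /healthz", which is what the subsequent round_trippers GETs in the log effectively do through the client-go machinery.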
	
	
	==> CRI-O <==
	Jun 10 10:44:50 ha-565925 crio[681]: time="2024-06-10 10:44:50.049655662Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718016290049628595,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=777c940b-7ad2-4f1c-839a-279d1b5436b8 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:44:50 ha-565925 crio[681]: time="2024-06-10 10:44:50.050329843Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=209be331-44fd-4254-8639-dcbe5a3ebf3a name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:44:50 ha-565925 crio[681]: time="2024-06-10 10:44:50.050385290Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=209be331-44fd-4254-8639-dcbe5a3ebf3a name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:44:50 ha-565925 crio[681]: time="2024-06-10 10:44:50.050625544Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e2874c04d7e6035f0b4f93397eceefa3af883aa2a03dc83be4a8aced86a5e132,PodSandboxId:4f03a24f1c978aee692934393624f50f3f6023665dc034769ec878f8b821ad07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718016084446089772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7132613c40918526f05a0d1ea655de838d95cdfc74880ab8c90e7b98b32ee7cc,PodSandboxId:de365696855f1fe15558874733bf40446cd8ab359b3d632ae71d8cd5f32d98b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718015930142271529,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f037e4537f6182747a78c8398e388d1cd43fe536754d6d8a50f52b8689b3163,PodSandboxId:937195f05576713819cba22da4e17238c7f675cd0d37572dfc6718570bb4938f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718015930175570021,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:534a412f3a743952b0fba0175071fb9a47fd04169c4014721c7e5c6931d7e62f,PodSandboxId:b454f12ed3fe06b7ae98d62eb1932133902e43f1db5bb572871f5eb7765942b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718015930144728548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e9
8b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c76fc1da29c41233d9d8517a0d5b17f146c7cde3802483aab50bc3ba11b78b,PodSandboxId:71cfc7bcda08cf3e1c90d0f5cf5f33fc51fb4dd5f028ab6590d0b19f056460dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1718015928620479055,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa492285e9f663d2b76c575594b5ba550e97e7861c891c05022d7e2ac1a78f91,PodSandboxId:9c2610533ce9301fe46003696bb8fb9ed9f112b3cb0f1a144f0e614826879c22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171801592
5064900688,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbbb62793adf92dc3d7d5d72b02fb98e653c558237baa7067bce51a5b0c25553,PodSandboxId:235cdb6eec97308e5c02c06c504736e6bcecc139bc81369249fd408eb0a4a674,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17180159080
18315319,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7458bb04dd39e8e0618ded8278600c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b93b06d8221ab57065f3fdffaa93240b54fcaea6359ee510147933ebc24cdd,PodSandboxId:ae496093662088de763239c043f30d1770c7ce342b51213f0abd2a6d78e5beb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718015904609356393,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:538119110afb1122b2b9d43d0a15441ed76f351b1221ca19caa981d3aab0eb82,PodSandboxId:1c1c2a570436913958921b6806bdea488c57ba8e053d9bc44cde3c1407fe58c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718015904613208681,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:243a70e2c1f2d12414697f36420e1832aa5b0376a87efc3acc5785d8295da364,PodSandboxId:f17389e4e287341cc04675fc44f2af0a57d0270453e694289f6c820fa120ef66,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718015904641357133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf7ff93de6e7c74b032d544065b02f69bea61c82b2d7cd580d6673506fd0496,PodSandboxId:5319b527fdd15e4a549cd2140bbe1e0e473956046be736501f4f1692b6a0a208,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718015904537481240,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=209be331-44fd-4254-8639-dcbe5a3ebf3a name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:44:50 ha-565925 crio[681]: time="2024-06-10 10:44:50.091301594Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b791c4d8-8955-4fd0-aac8-b55fe98b0a96 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:44:50 ha-565925 crio[681]: time="2024-06-10 10:44:50.091375633Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b791c4d8-8955-4fd0-aac8-b55fe98b0a96 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:44:50 ha-565925 crio[681]: time="2024-06-10 10:44:50.092648319Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b4c5076-234d-4727-81f1-e4d7b7b2acba name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:44:50 ha-565925 crio[681]: time="2024-06-10 10:44:50.093226324Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718016290093202397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b4c5076-234d-4727-81f1-e4d7b7b2acba name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:44:50 ha-565925 crio[681]: time="2024-06-10 10:44:50.093654606Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a0721c86-9aa2-4100-8a30-0f08622ed48b name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:44:50 ha-565925 crio[681]: time="2024-06-10 10:44:50.093705183Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a0721c86-9aa2-4100-8a30-0f08622ed48b name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:44:50 ha-565925 crio[681]: time="2024-06-10 10:44:50.094009333Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e2874c04d7e6035f0b4f93397eceefa3af883aa2a03dc83be4a8aced86a5e132,PodSandboxId:4f03a24f1c978aee692934393624f50f3f6023665dc034769ec878f8b821ad07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718016084446089772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7132613c40918526f05a0d1ea655de838d95cdfc74880ab8c90e7b98b32ee7cc,PodSandboxId:de365696855f1fe15558874733bf40446cd8ab359b3d632ae71d8cd5f32d98b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718015930142271529,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f037e4537f6182747a78c8398e388d1cd43fe536754d6d8a50f52b8689b3163,PodSandboxId:937195f05576713819cba22da4e17238c7f675cd0d37572dfc6718570bb4938f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718015930175570021,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:534a412f3a743952b0fba0175071fb9a47fd04169c4014721c7e5c6931d7e62f,PodSandboxId:b454f12ed3fe06b7ae98d62eb1932133902e43f1db5bb572871f5eb7765942b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718015930144728548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e9
8b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c76fc1da29c41233d9d8517a0d5b17f146c7cde3802483aab50bc3ba11b78b,PodSandboxId:71cfc7bcda08cf3e1c90d0f5cf5f33fc51fb4dd5f028ab6590d0b19f056460dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1718015928620479055,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa492285e9f663d2b76c575594b5ba550e97e7861c891c05022d7e2ac1a78f91,PodSandboxId:9c2610533ce9301fe46003696bb8fb9ed9f112b3cb0f1a144f0e614826879c22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171801592
5064900688,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbbb62793adf92dc3d7d5d72b02fb98e653c558237baa7067bce51a5b0c25553,PodSandboxId:235cdb6eec97308e5c02c06c504736e6bcecc139bc81369249fd408eb0a4a674,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17180159080
18315319,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7458bb04dd39e8e0618ded8278600c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b93b06d8221ab57065f3fdffaa93240b54fcaea6359ee510147933ebc24cdd,PodSandboxId:ae496093662088de763239c043f30d1770c7ce342b51213f0abd2a6d78e5beb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718015904609356393,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:538119110afb1122b2b9d43d0a15441ed76f351b1221ca19caa981d3aab0eb82,PodSandboxId:1c1c2a570436913958921b6806bdea488c57ba8e053d9bc44cde3c1407fe58c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718015904613208681,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:243a70e2c1f2d12414697f36420e1832aa5b0376a87efc3acc5785d8295da364,PodSandboxId:f17389e4e287341cc04675fc44f2af0a57d0270453e694289f6c820fa120ef66,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718015904641357133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf7ff93de6e7c74b032d544065b02f69bea61c82b2d7cd580d6673506fd0496,PodSandboxId:5319b527fdd15e4a549cd2140bbe1e0e473956046be736501f4f1692b6a0a208,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718015904537481240,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a0721c86-9aa2-4100-8a30-0f08622ed48b name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:44:50 ha-565925 crio[681]: time="2024-06-10 10:44:50.140034878Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=40932772-5ec1-4ab2-ae39-814533eda217 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:44:50 ha-565925 crio[681]: time="2024-06-10 10:44:50.140280922Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=40932772-5ec1-4ab2-ae39-814533eda217 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:44:50 ha-565925 crio[681]: time="2024-06-10 10:44:50.141651456Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b86b0e7a-b16d-4136-b56d-b9def1cd0945 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:44:50 ha-565925 crio[681]: time="2024-06-10 10:44:50.142240830Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718016290142215227,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b86b0e7a-b16d-4136-b56d-b9def1cd0945 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:44:50 ha-565925 crio[681]: time="2024-06-10 10:44:50.143005525Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=913040b0-1ec6-47d5-b5e0-986ea1557772 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:44:50 ha-565925 crio[681]: time="2024-06-10 10:44:50.143076264Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=913040b0-1ec6-47d5-b5e0-986ea1557772 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:44:50 ha-565925 crio[681]: time="2024-06-10 10:44:50.143326225Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e2874c04d7e6035f0b4f93397eceefa3af883aa2a03dc83be4a8aced86a5e132,PodSandboxId:4f03a24f1c978aee692934393624f50f3f6023665dc034769ec878f8b821ad07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718016084446089772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7132613c40918526f05a0d1ea655de838d95cdfc74880ab8c90e7b98b32ee7cc,PodSandboxId:de365696855f1fe15558874733bf40446cd8ab359b3d632ae71d8cd5f32d98b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718015930142271529,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f037e4537f6182747a78c8398e388d1cd43fe536754d6d8a50f52b8689b3163,PodSandboxId:937195f05576713819cba22da4e17238c7f675cd0d37572dfc6718570bb4938f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718015930175570021,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:534a412f3a743952b0fba0175071fb9a47fd04169c4014721c7e5c6931d7e62f,PodSandboxId:b454f12ed3fe06b7ae98d62eb1932133902e43f1db5bb572871f5eb7765942b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718015930144728548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e9
8b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c76fc1da29c41233d9d8517a0d5b17f146c7cde3802483aab50bc3ba11b78b,PodSandboxId:71cfc7bcda08cf3e1c90d0f5cf5f33fc51fb4dd5f028ab6590d0b19f056460dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1718015928620479055,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa492285e9f663d2b76c575594b5ba550e97e7861c891c05022d7e2ac1a78f91,PodSandboxId:9c2610533ce9301fe46003696bb8fb9ed9f112b3cb0f1a144f0e614826879c22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171801592
5064900688,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbbb62793adf92dc3d7d5d72b02fb98e653c558237baa7067bce51a5b0c25553,PodSandboxId:235cdb6eec97308e5c02c06c504736e6bcecc139bc81369249fd408eb0a4a674,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17180159080
18315319,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7458bb04dd39e8e0618ded8278600c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b93b06d8221ab57065f3fdffaa93240b54fcaea6359ee510147933ebc24cdd,PodSandboxId:ae496093662088de763239c043f30d1770c7ce342b51213f0abd2a6d78e5beb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718015904609356393,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:538119110afb1122b2b9d43d0a15441ed76f351b1221ca19caa981d3aab0eb82,PodSandboxId:1c1c2a570436913958921b6806bdea488c57ba8e053d9bc44cde3c1407fe58c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718015904613208681,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:243a70e2c1f2d12414697f36420e1832aa5b0376a87efc3acc5785d8295da364,PodSandboxId:f17389e4e287341cc04675fc44f2af0a57d0270453e694289f6c820fa120ef66,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718015904641357133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf7ff93de6e7c74b032d544065b02f69bea61c82b2d7cd580d6673506fd0496,PodSandboxId:5319b527fdd15e4a549cd2140bbe1e0e473956046be736501f4f1692b6a0a208,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718015904537481240,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=913040b0-1ec6-47d5-b5e0-986ea1557772 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:44:50 ha-565925 crio[681]: time="2024-06-10 10:44:50.179229364Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c46f28ff-a394-4d5e-91d2-d30db3e7bfaf name=/runtime.v1.RuntimeService/Version
	Jun 10 10:44:50 ha-565925 crio[681]: time="2024-06-10 10:44:50.179312115Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c46f28ff-a394-4d5e-91d2-d30db3e7bfaf name=/runtime.v1.RuntimeService/Version
	Jun 10 10:44:50 ha-565925 crio[681]: time="2024-06-10 10:44:50.180550331Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c8f5d6ed-01ab-4b8e-be56-fc25140239e1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:44:50 ha-565925 crio[681]: time="2024-06-10 10:44:50.181221416Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718016290181198437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8f5d6ed-01ab-4b8e-be56-fc25140239e1 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:44:50 ha-565925 crio[681]: time="2024-06-10 10:44:50.181647224Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37e7312f-4855-4bdf-8d02-b6d8d3a69e61 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:44:50 ha-565925 crio[681]: time="2024-06-10 10:44:50.181699189Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37e7312f-4855-4bdf-8d02-b6d8d3a69e61 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:44:50 ha-565925 crio[681]: time="2024-06-10 10:44:50.181976456Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e2874c04d7e6035f0b4f93397eceefa3af883aa2a03dc83be4a8aced86a5e132,PodSandboxId:4f03a24f1c978aee692934393624f50f3f6023665dc034769ec878f8b821ad07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718016084446089772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7132613c40918526f05a0d1ea655de838d95cdfc74880ab8c90e7b98b32ee7cc,PodSandboxId:de365696855f1fe15558874733bf40446cd8ab359b3d632ae71d8cd5f32d98b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718015930142271529,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f037e4537f6182747a78c8398e388d1cd43fe536754d6d8a50f52b8689b3163,PodSandboxId:937195f05576713819cba22da4e17238c7f675cd0d37572dfc6718570bb4938f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718015930175570021,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:534a412f3a743952b0fba0175071fb9a47fd04169c4014721c7e5c6931d7e62f,PodSandboxId:b454f12ed3fe06b7ae98d62eb1932133902e43f1db5bb572871f5eb7765942b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718015930144728548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e9
8b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c76fc1da29c41233d9d8517a0d5b17f146c7cde3802483aab50bc3ba11b78b,PodSandboxId:71cfc7bcda08cf3e1c90d0f5cf5f33fc51fb4dd5f028ab6590d0b19f056460dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1718015928620479055,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa492285e9f663d2b76c575594b5ba550e97e7861c891c05022d7e2ac1a78f91,PodSandboxId:9c2610533ce9301fe46003696bb8fb9ed9f112b3cb0f1a144f0e614826879c22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171801592
5064900688,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbbb62793adf92dc3d7d5d72b02fb98e653c558237baa7067bce51a5b0c25553,PodSandboxId:235cdb6eec97308e5c02c06c504736e6bcecc139bc81369249fd408eb0a4a674,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17180159080
18315319,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7458bb04dd39e8e0618ded8278600c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b93b06d8221ab57065f3fdffaa93240b54fcaea6359ee510147933ebc24cdd,PodSandboxId:ae496093662088de763239c043f30d1770c7ce342b51213f0abd2a6d78e5beb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718015904609356393,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:538119110afb1122b2b9d43d0a15441ed76f351b1221ca19caa981d3aab0eb82,PodSandboxId:1c1c2a570436913958921b6806bdea488c57ba8e053d9bc44cde3c1407fe58c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718015904613208681,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:243a70e2c1f2d12414697f36420e1832aa5b0376a87efc3acc5785d8295da364,PodSandboxId:f17389e4e287341cc04675fc44f2af0a57d0270453e694289f6c820fa120ef66,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718015904641357133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf7ff93de6e7c74b032d544065b02f69bea61c82b2d7cd580d6673506fd0496,PodSandboxId:5319b527fdd15e4a549cd2140bbe1e0e473956046be736501f4f1692b6a0a208,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718015904537481240,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37e7312f-4855-4bdf-8d02-b6d8d3a69e61 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e2874c04d7e60       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   4f03a24f1c978       busybox-fc5497c4f-6wmkd
	1f037e4537f61       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   937195f055767       coredns-7db6d8ff4d-545cf
	534a412f3a743       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   b454f12ed3fe0       coredns-7db6d8ff4d-wn6nh
	7132613c40918       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   de365696855f1       storage-provisioner
	c7c76fc1da29c       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266    6 minutes ago       Running             kindnet-cni               0                   71cfc7bcda08c       kindnet-rnn59
	fa492285e9f66       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      6 minutes ago       Running             kube-proxy                0                   9c2610533ce93       kube-proxy-wdjhn
	fbbb62793adf9       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   235cdb6eec973       kube-vip-ha-565925
	243a70e2c1f2d       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      6 minutes ago       Running             kube-apiserver            0                   f17389e4e2873       kube-apiserver-ha-565925
	538119110afb1       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      6 minutes ago       Running             kube-scheduler            0                   1c1c2a5704369       kube-scheduler-ha-565925
	15b93b06d8221       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   ae49609366208       etcd-ha-565925
	bcf7ff93de6e7       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      6 minutes ago       Running             kube-controller-manager   0                   5319b527fdd15       kube-controller-manager-ha-565925
	
	
	==> coredns [1f037e4537f6182747a78c8398e388d1cd43fe536754d6d8a50f52b8689b3163] <==
	[INFO] 127.0.0.1:34561 - 56492 "HINFO IN 3219957272136125807.6377571563397303703. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009858319s
	[INFO] 10.244.0.4:54950 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.012741474s
	[INFO] 10.244.1.2:48212 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000372595s
	[INFO] 10.244.1.2:38672 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000558623s
	[INFO] 10.244.1.2:39378 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001712401s
	[INFO] 10.244.2.2:60283 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000168931s
	[INFO] 10.244.0.4:44797 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009875834s
	[INFO] 10.244.0.4:48555 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000169499s
	[INFO] 10.244.0.4:59395 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000177597s
	[INFO] 10.244.1.2:59265 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000530757s
	[INFO] 10.244.1.2:47710 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001604733s
	[INFO] 10.244.1.2:52315 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000138586s
	[INFO] 10.244.2.2:55693 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155911s
	[INFO] 10.244.2.2:58799 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094891s
	[INFO] 10.244.2.2:42423 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109708s
	[INFO] 10.244.0.4:50874 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174304s
	[INFO] 10.244.1.2:48744 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098356s
	[INFO] 10.244.1.2:57572 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107588s
	[INFO] 10.244.1.2:43906 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000582793s
	[INFO] 10.244.0.4:36933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083881s
	[INFO] 10.244.0.4:57895 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011453s
	[INFO] 10.244.1.2:33157 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149048s
	[INFO] 10.244.1.2:51327 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000136605s
	[INFO] 10.244.1.2:57659 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000126557s
	[INFO] 10.244.2.2:42606 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000153767s
	
	
	==> coredns [534a412f3a743952b0fba0175071fb9a47fd04169c4014721c7e5c6931d7e62f] <==
	[INFO] 10.244.0.4:42272 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000196791s
	[INFO] 10.244.1.2:51041 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144884s
	[INFO] 10.244.1.2:56818 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001759713s
	[INFO] 10.244.1.2:38288 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.001994069s
	[INFO] 10.244.1.2:34752 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150866s
	[INFO] 10.244.1.2:40260 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000146857s
	[INFO] 10.244.2.2:44655 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154352s
	[INFO] 10.244.2.2:33459 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001816989s
	[INFO] 10.244.2.2:44738 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000324114s
	[INFO] 10.244.2.2:47736 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091876s
	[INFO] 10.244.2.2:44490 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001443467s
	[INFO] 10.244.0.4:55625 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000175656s
	[INFO] 10.244.0.4:39661 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080931s
	[INFO] 10.244.0.4:50296 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000636942s
	[INFO] 10.244.1.2:38824 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118172s
	[INFO] 10.244.2.2:42842 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216365s
	[INFO] 10.244.2.2:59068 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011868s
	[INFO] 10.244.2.2:38486 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000206394s
	[INFO] 10.244.2.2:33649 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110039s
	[INFO] 10.244.0.4:39573 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000202562s
	[INFO] 10.244.0.4:57326 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000128886s
	[INFO] 10.244.1.2:39682 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000217002s
	[INFO] 10.244.2.2:39360 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000367518s
	[INFO] 10.244.2.2:55914 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000522453s
	[INFO] 10.244.2.2:54263 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00020711s
	
	
	==> describe nodes <==
	Name:               ha-565925
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565925
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-565925
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T10_38_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 10:38:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565925
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 10:44:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 10:41:34 +0000   Mon, 10 Jun 2024 10:38:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 10:41:34 +0000   Mon, 10 Jun 2024 10:38:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 10:41:34 +0000   Mon, 10 Jun 2024 10:38:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 10:41:34 +0000   Mon, 10 Jun 2024 10:38:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.208
	  Hostname:    ha-565925
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 81e39b112b50436db5c7fc16ce8eb53e
	  System UUID:                81e39b11-2b50-436d-b5c7-fc16ce8eb53e
	  Boot ID:                    afd4fe8d-84f7-41ff-9890-dc78b1ff1343
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6wmkd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 coredns-7db6d8ff4d-545cf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m7s
	  kube-system                 coredns-7db6d8ff4d-wn6nh             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m7s
	  kube-system                 etcd-ha-565925                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m20s
	  kube-system                 kindnet-rnn59                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m6s
	  kube-system                 kube-apiserver-ha-565925             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-controller-manager-ha-565925    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-proxy-wdjhn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-scheduler-ha-565925             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-vip-ha-565925                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m5s   kube-proxy       
	  Normal  Starting                 6m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m20s  kubelet          Node ha-565925 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m20s  kubelet          Node ha-565925 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m20s  kubelet          Node ha-565925 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m7s   node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Normal  NodeReady                6m1s   kubelet          Node ha-565925 status is now: NodeReady
	  Normal  RegisteredNode           4m46s  node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Normal  RegisteredNode           3m36s  node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	
	
	Name:               ha-565925-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565925-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-565925
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T10_39_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 10:39:47 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565925-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 10:42:29 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 10 Jun 2024 10:41:50 +0000   Mon, 10 Jun 2024 10:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 10 Jun 2024 10:41:50 +0000   Mon, 10 Jun 2024 10:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 10 Jun 2024 10:41:50 +0000   Mon, 10 Jun 2024 10:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 10 Jun 2024 10:41:50 +0000   Mon, 10 Jun 2024 10:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    ha-565925-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 55a76fcaaea54bebb8694a2ff5e7d2ea
	  System UUID:                55a76fca-aea5-4beb-b869-4a2ff5e7d2ea
	  Boot ID:                    d5b6f0ad-b291-4951-bab9-e2cd70014f7f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8g67g                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 etcd-ha-565925-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m1s
	  kube-system                 kindnet-9jv7q                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m3s
	  kube-system                 kube-apiserver-ha-565925-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-controller-manager-ha-565925-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-proxy-vbgnx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-scheduler-ha-565925-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-vip-ha-565925-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m58s                kube-proxy       
	  Normal  NodeHasSufficientMemory  5m3s (x8 over 5m3s)  kubelet          Node ha-565925-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m3s (x8 over 5m3s)  kubelet          Node ha-565925-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m3s (x7 over 5m3s)  kubelet          Node ha-565925-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m2s                 node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal  RegisteredNode           4m46s                node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal  RegisteredNode           3m36s                node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal  NodeNotReady             97s                  node-controller  Node ha-565925-m02 status is now: NodeNotReady
	
	
	Name:               ha-565925-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565925-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-565925
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T10_40_59_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 10:40:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565925-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 10:44:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 10:41:27 +0000   Mon, 10 Jun 2024 10:40:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 10:41:27 +0000   Mon, 10 Jun 2024 10:40:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 10:41:27 +0000   Mon, 10 Jun 2024 10:40:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 10:41:27 +0000   Mon, 10 Jun 2024 10:41:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    ha-565925-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c8de12ccd43b4441ac42fe5a4b57ed64
	  System UUID:                c8de12cc-d43b-4441-ac42-fe5a4b57ed64
	  Boot ID:                    d2c38454-f5bf-4fee-84c8-941e8e5709a4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jmbg2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 etcd-ha-565925-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m52s
	  kube-system                 kindnet-9tcng                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m54s
	  kube-system                 kube-apiserver-ha-565925-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-controller-manager-ha-565925-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-proxy-d44ft                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-scheduler-ha-565925-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-vip-ha-565925-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m49s                  kube-proxy       
	  Normal  Starting                 3m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m54s (x8 over 3m54s)  kubelet          Node ha-565925-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s (x8 over 3m54s)  kubelet          Node ha-565925-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s (x7 over 3m54s)  kubelet          Node ha-565925-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-565925-m03 event: Registered Node ha-565925-m03 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-565925-m03 event: Registered Node ha-565925-m03 in Controller
	  Normal  RegisteredNode           3m36s                  node-controller  Node ha-565925-m03 event: Registered Node ha-565925-m03 in Controller
	
	
	Name:               ha-565925-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565925-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-565925
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T10_41_59_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 10:41:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565925-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 10:44:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 10:42:29 +0000   Mon, 10 Jun 2024 10:41:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 10:42:29 +0000   Mon, 10 Jun 2024 10:41:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 10:42:29 +0000   Mon, 10 Jun 2024 10:41:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 10:42:29 +0000   Mon, 10 Jun 2024 10:42:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.229
	  Hostname:    ha-565925-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5196e1f9b5684ae78368fe8d66c3d24c
	  System UUID:                5196e1f9-b568-4ae7-8368-fe8d66c3d24c
	  Boot ID:                    ffecf9d5-cc7c-4751-819f-473afd63d8a7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-lkf5b       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m52s
	  kube-system                 kube-proxy-dpsbw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m46s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m52s (x2 over 2m52s)  kubelet          Node ha-565925-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m52s (x2 over 2m52s)  kubelet          Node ha-565925-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m52s (x2 over 2m52s)  kubelet          Node ha-565925-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m51s                  node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal  RegisteredNode           2m51s                  node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal  RegisteredNode           2m47s                  node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal  NodeReady                2m41s                  kubelet          Node ha-565925-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jun10 10:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051910] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038738] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.451665] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jun10 10:38] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.529458] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.150837] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.061096] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061390] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.176128] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.114890] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.264219] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +3.909095] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +3.637727] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +0.061637] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.135890] systemd-fstab-generator[1360]: Ignoring "noauto" option for root device
	[  +0.082129] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.392312] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.014769] kauditd_printk_skb: 43 callbacks suppressed
	[  +9.917879] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [15b93b06d8221ab57065f3fdffaa93240b54fcaea6359ee510147933ebc24cdd] <==
	{"level":"warn","ts":"2024-06-10T10:44:50.430602Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:44:50.446509Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:44:50.453941Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:44:50.458238Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:44:50.472863Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:44:50.480478Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:44:50.487537Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:44:50.487829Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:44:50.491336Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:44:50.49495Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:44:50.503253Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:44:50.509658Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:44:50.516234Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:44:50.519243Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:44:50.52209Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:44:50.530266Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:44:50.53821Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:44:50.540118Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:44:50.548652Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:44:50.552596Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:44:50.555428Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:44:50.561109Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:44:50.567132Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:44:50.573419Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:44:50.587134Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:44:50 up 6 min,  0 users,  load average: 0.13, 0.27, 0.17
	Linux ha-565925 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c7c76fc1da29c41233d9d8517a0d5b17f146c7cde3802483aab50bc3ba11b78b] <==
	I0610 10:44:19.574271       1 main.go:250] Node ha-565925-m04 has CIDR [10.244.3.0/24] 
	I0610 10:44:29.590493       1 main.go:223] Handling node with IPs: map[192.168.39.208:{}]
	I0610 10:44:29.590585       1 main.go:227] handling current node
	I0610 10:44:29.590601       1 main.go:223] Handling node with IPs: map[192.168.39.230:{}]
	I0610 10:44:29.590609       1 main.go:250] Node ha-565925-m02 has CIDR [10.244.1.0/24] 
	I0610 10:44:29.590957       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0610 10:44:29.591026       1 main.go:250] Node ha-565925-m03 has CIDR [10.244.2.0/24] 
	I0610 10:44:29.591182       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0610 10:44:29.591265       1 main.go:250] Node ha-565925-m04 has CIDR [10.244.3.0/24] 
	I0610 10:44:39.598638       1 main.go:223] Handling node with IPs: map[192.168.39.208:{}]
	I0610 10:44:39.598680       1 main.go:227] handling current node
	I0610 10:44:39.598702       1 main.go:223] Handling node with IPs: map[192.168.39.230:{}]
	I0610 10:44:39.598708       1 main.go:250] Node ha-565925-m02 has CIDR [10.244.1.0/24] 
	I0610 10:44:39.598900       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0610 10:44:39.598922       1 main.go:250] Node ha-565925-m03 has CIDR [10.244.2.0/24] 
	I0610 10:44:39.598985       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0610 10:44:39.599004       1 main.go:250] Node ha-565925-m04 has CIDR [10.244.3.0/24] 
	I0610 10:44:49.605142       1 main.go:223] Handling node with IPs: map[192.168.39.208:{}]
	I0610 10:44:49.605234       1 main.go:227] handling current node
	I0610 10:44:49.605263       1 main.go:223] Handling node with IPs: map[192.168.39.230:{}]
	I0610 10:44:49.605281       1 main.go:250] Node ha-565925-m02 has CIDR [10.244.1.0/24] 
	I0610 10:44:49.605406       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0610 10:44:49.605427       1 main.go:250] Node ha-565925-m03 has CIDR [10.244.2.0/24] 
	I0610 10:44:49.605503       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0610 10:44:49.605522       1 main.go:250] Node ha-565925-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [243a70e2c1f2d12414697f36420e1832aa5b0376a87efc3acc5785d8295da364] <==
	I0610 10:38:29.237625       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0610 10:38:29.243937       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.208]
	I0610 10:38:29.244948       1 controller.go:615] quota admission added evaluator for: endpoints
	I0610 10:38:29.249817       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 10:38:29.636376       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0610 10:38:30.878540       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0610 10:38:30.900420       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0610 10:38:30.918282       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0610 10:38:43.392950       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0610 10:38:43.998070       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0610 10:41:26.033150       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58026: use of closed network connection
	E0610 10:41:26.218468       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58040: use of closed network connection
	E0610 10:41:26.623437       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58094: use of closed network connection
	E0610 10:41:26.821697       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58112: use of closed network connection
	E0610 10:41:27.006953       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58122: use of closed network connection
	E0610 10:41:27.183339       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58136: use of closed network connection
	E0610 10:41:27.374566       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58168: use of closed network connection
	E0610 10:41:27.583065       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58186: use of closed network connection
	E0610 10:41:27.867867       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58208: use of closed network connection
	E0610 10:41:28.042423       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58226: use of closed network connection
	E0610 10:41:28.222259       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58254: use of closed network connection
	E0610 10:41:28.407977       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58264: use of closed network connection
	E0610 10:41:28.591934       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58278: use of closed network connection
	E0610 10:41:28.765354       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58298: use of closed network connection
	W0610 10:42:39.253920       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.208 192.168.39.76]
	
	
	==> kube-controller-manager [bcf7ff93de6e7c74b032d544065b02f69bea61c82b2d7cd580d6673506fd0496] <==
	I0610 10:39:48.292977       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-565925-m02"
	I0610 10:40:56.714956       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-565925-m03\" does not exist"
	I0610 10:40:56.729394       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-565925-m03" podCIDRs=["10.244.2.0/24"]
	I0610 10:40:58.317071       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-565925-m03"
	I0610 10:41:21.260124       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="102.475697ms"
	I0610 10:41:21.334558       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.092337ms"
	I0610 10:41:21.578712       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="244.097947ms"
	I0610 10:41:21.621353       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.528024ms"
	I0610 10:41:21.621483       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.431µs"
	I0610 10:41:21.752445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.356922ms"
	I0610 10:41:21.752544       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.295µs"
	I0610 10:41:24.856380       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.71329ms"
	I0610 10:41:24.856660       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.24µs"
	I0610 10:41:24.990853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.855541ms"
	I0610 10:41:24.991230       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="104.649µs"
	I0610 10:41:25.603425       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.496252ms"
	I0610 10:41:25.603541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.117µs"
	E0610 10:41:58.368353       1 certificate_controller.go:146] Sync csr-hcqgx failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-hcqgx": the object has been modified; please apply your changes to the latest version and try again
	I0610 10:41:58.654308       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-565925-m04\" does not exist"
	I0610 10:41:58.675229       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-565925-m04" podCIDRs=["10.244.3.0/24"]
	I0610 10:42:03.579018       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-565925-m04"
	I0610 10:42:09.621414       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-565925-m04"
	I0610 10:43:13.605629       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-565925-m04"
	I0610 10:43:13.686126       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.735253ms"
	I0610 10:43:13.686420       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="126.528µs"
	
	
	==> kube-proxy [fa492285e9f663d2b76c575594b5ba550e97e7861c891c05022d7e2ac1a78f91] <==
	I0610 10:38:45.218661       1 server_linux.go:69] "Using iptables proxy"
	I0610 10:38:45.235348       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.208"]
	I0610 10:38:45.279266       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 10:38:45.279353       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 10:38:45.279377       1 server_linux.go:165] "Using iptables Proxier"
	I0610 10:38:45.282213       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 10:38:45.282534       1 server.go:872] "Version info" version="v1.30.1"
	I0610 10:38:45.282607       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 10:38:45.284663       1 config.go:192] "Starting service config controller"
	I0610 10:38:45.284789       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 10:38:45.284861       1 config.go:101] "Starting endpoint slice config controller"
	I0610 10:38:45.284923       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 10:38:45.286425       1 config.go:319] "Starting node config controller"
	I0610 10:38:45.286476       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 10:38:45.385453       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 10:38:45.385461       1 shared_informer.go:320] Caches are synced for service config
	I0610 10:38:45.386991       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [538119110afb1122b2b9d43d0a15441ed76f351b1221ca19caa981d3aab0eb82] <==
	E0610 10:38:28.841032       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0610 10:38:29.100887       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 10:38:29.101379       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 10:38:32.184602       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0610 10:40:56.778133       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-9tcng\": pod kindnet-9tcng is already assigned to node \"ha-565925-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-9tcng" node="ha-565925-m03"
	E0610 10:40:56.778301       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod c47fe372-aee9-4fb2-9c62-b84341af1c81(kube-system/kindnet-9tcng) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-9tcng"
	E0610 10:40:56.778331       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-9tcng\": pod kindnet-9tcng is already assigned to node \"ha-565925-m03\"" pod="kube-system/kindnet-9tcng"
	I0610 10:40:56.778371       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-9tcng" node="ha-565925-m03"
	E0610 10:40:56.907191       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-l6zzp\": pod kindnet-l6zzp is already assigned to node \"ha-565925-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-l6zzp" node="ha-565925-m03"
	E0610 10:40:56.907263       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-l6zzp\": pod kindnet-l6zzp is already assigned to node \"ha-565925-m03\"" pod="kube-system/kindnet-l6zzp"
	I0610 10:41:21.202401       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="53b82f36-f185-4980-9722-bfd952e91286" pod="default/busybox-fc5497c4f-8g67g" assumedNode="ha-565925-m02" currentNode="ha-565925-m03"
	E0610 10:41:21.213967       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-8g67g\": pod busybox-fc5497c4f-8g67g is already assigned to node \"ha-565925-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-8g67g" node="ha-565925-m03"
	E0610 10:41:21.214050       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 53b82f36-f185-4980-9722-bfd952e91286(default/busybox-fc5497c4f-8g67g) was assumed on ha-565925-m03 but assigned to ha-565925-m02" pod="default/busybox-fc5497c4f-8g67g"
	E0610 10:41:21.214076       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-8g67g\": pod busybox-fc5497c4f-8g67g is already assigned to node \"ha-565925-m02\"" pod="default/busybox-fc5497c4f-8g67g"
	I0610 10:41:21.214097       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-8g67g" node="ha-565925-m02"
	E0610 10:41:21.261604       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-6wmkd\": pod busybox-fc5497c4f-6wmkd is already assigned to node \"ha-565925\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-6wmkd" node="ha-565925"
	E0610 10:41:21.261683       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-6wmkd\": pod busybox-fc5497c4f-6wmkd is already assigned to node \"ha-565925\"" pod="default/busybox-fc5497c4f-6wmkd"
	E0610 10:41:58.751185       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-hr7qn\": pod kube-proxy-hr7qn is already assigned to node \"ha-565925-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-hr7qn" node="ha-565925-m04"
	E0610 10:41:58.751457       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 3bb3dab4-2341-44cc-b41f-4333e4bb1138(kube-system/kube-proxy-hr7qn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-hr7qn"
	E0610 10:41:58.751511       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-hr7qn\": pod kube-proxy-hr7qn is already assigned to node \"ha-565925-m04\"" pod="kube-system/kube-proxy-hr7qn"
	I0610 10:41:58.751611       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-hr7qn" node="ha-565925-m04"
	E0610 10:41:58.753913       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-lkf5b\": pod kindnet-lkf5b is already assigned to node \"ha-565925-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-lkf5b" node="ha-565925-m04"
	E0610 10:41:58.754717       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 087be749-ed61-402c-86cf-ccf5bc66b9f9(kube-system/kindnet-lkf5b) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-lkf5b"
	E0610 10:41:58.756692       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-lkf5b\": pod kindnet-lkf5b is already assigned to node \"ha-565925-m04\"" pod="kube-system/kindnet-lkf5b"
	I0610 10:41:58.756887       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-lkf5b" node="ha-565925-m04"
	
	
	==> kubelet <==
	Jun 10 10:40:30 ha-565925 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 10:40:30 ha-565925 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 10:40:30 ha-565925 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 10:41:21 ha-565925 kubelet[1367]: I0610 10:41:21.254486    1367 topology_manager.go:215] "Topology Admit Handler" podUID="f8a1e0dc-e561-4def-9787-c5d0eda08fda" podNamespace="default" podName="busybox-fc5497c4f-6wmkd"
	Jun 10 10:41:21 ha-565925 kubelet[1367]: I0610 10:41:21.404369    1367 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9r9q\" (UniqueName: \"kubernetes.io/projected/f8a1e0dc-e561-4def-9787-c5d0eda08fda-kube-api-access-q9r9q\") pod \"busybox-fc5497c4f-6wmkd\" (UID: \"f8a1e0dc-e561-4def-9787-c5d0eda08fda\") " pod="default/busybox-fc5497c4f-6wmkd"
	Jun 10 10:41:30 ha-565925 kubelet[1367]: E0610 10:41:30.828972    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 10:41:30 ha-565925 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 10:41:30 ha-565925 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 10:41:30 ha-565925 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 10:41:30 ha-565925 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 10:42:30 ha-565925 kubelet[1367]: E0610 10:42:30.827923    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 10:42:30 ha-565925 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 10:42:30 ha-565925 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 10:42:30 ha-565925 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 10:42:30 ha-565925 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 10:43:30 ha-565925 kubelet[1367]: E0610 10:43:30.834914    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 10:43:30 ha-565925 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 10:43:30 ha-565925 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 10:43:30 ha-565925 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 10:43:30 ha-565925 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 10:44:30 ha-565925 kubelet[1367]: E0610 10:44:30.828865    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 10:44:30 ha-565925 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 10:44:30 ha-565925 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 10:44:30 ha-565925 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 10:44:30 ha-565925 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-565925 -n ha-565925
helpers_test.go:261: (dbg) Run:  kubectl --context ha-565925 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.81s)

x
+
TestMultiControlPlane/serial/RestartSecondaryNode (61.28s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr: exit status 3 (3.194018346s)

-- stdout --
	ha-565925
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565925-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-565925-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565925-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0610 10:44:55.138144   26654 out.go:291] Setting OutFile to fd 1 ...
	I0610 10:44:55.138403   26654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:44:55.138414   26654 out.go:304] Setting ErrFile to fd 2...
	I0610 10:44:55.138418   26654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:44:55.138604   26654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 10:44:55.138763   26654 out.go:298] Setting JSON to false
	I0610 10:44:55.138786   26654 mustload.go:65] Loading cluster: ha-565925
	I0610 10:44:55.138917   26654 notify.go:220] Checking for updates...
	I0610 10:44:55.139299   26654 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:44:55.139320   26654 status.go:255] checking status of ha-565925 ...
	I0610 10:44:55.139798   26654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:55.139851   26654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:55.158297   26654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42653
	I0610 10:44:55.158696   26654 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:55.159262   26654 main.go:141] libmachine: Using API Version  1
	I0610 10:44:55.159286   26654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:55.159727   26654 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:55.159953   26654 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:44:55.161520   26654 status.go:330] ha-565925 host status = "Running" (err=<nil>)
	I0610 10:44:55.161537   26654 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:44:55.161910   26654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:55.161982   26654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:55.177084   26654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40783
	I0610 10:44:55.177487   26654 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:55.177929   26654 main.go:141] libmachine: Using API Version  1
	I0610 10:44:55.177949   26654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:55.178281   26654 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:55.178419   26654 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:44:55.180897   26654 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:44:55.181374   26654 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:44:55.181410   26654 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:44:55.181518   26654 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:44:55.181798   26654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:55.181846   26654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:55.196443   26654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33131
	I0610 10:44:55.196822   26654 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:55.197331   26654 main.go:141] libmachine: Using API Version  1
	I0610 10:44:55.197353   26654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:55.197753   26654 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:55.198117   26654 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:44:55.198367   26654 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:44:55.198395   26654 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:44:55.201650   26654 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:44:55.202148   26654 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:44:55.202182   26654 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:44:55.202305   26654 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:44:55.202499   26654 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:44:55.202695   26654 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:44:55.202847   26654 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:44:55.288049   26654 ssh_runner.go:195] Run: systemctl --version
	I0610 10:44:55.294271   26654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:44:55.309280   26654 kubeconfig.go:125] found "ha-565925" server: "https://192.168.39.254:8443"
	I0610 10:44:55.309313   26654 api_server.go:166] Checking apiserver status ...
	I0610 10:44:55.309351   26654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:44:55.333414   26654 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	W0610 10:44:55.349217   26654 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 10:44:55.349269   26654 ssh_runner.go:195] Run: ls
	I0610 10:44:55.355092   26654 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0610 10:44:55.364374   26654 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0610 10:44:55.364394   26654 status.go:422] ha-565925 apiserver status = Running (err=<nil>)
	I0610 10:44:55.364402   26654 status.go:257] ha-565925 status: &{Name:ha-565925 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:44:55.364416   26654 status.go:255] checking status of ha-565925-m02 ...
	I0610 10:44:55.364681   26654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:55.364716   26654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:55.380513   26654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43519
	I0610 10:44:55.380886   26654 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:55.381319   26654 main.go:141] libmachine: Using API Version  1
	I0610 10:44:55.381344   26654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:55.381622   26654 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:55.381783   26654 main.go:141] libmachine: (ha-565925-m02) Calling .GetState
	I0610 10:44:55.383377   26654 status.go:330] ha-565925-m02 host status = "Running" (err=<nil>)
	I0610 10:44:55.383391   26654 host.go:66] Checking if "ha-565925-m02" exists ...
	I0610 10:44:55.383713   26654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:55.383756   26654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:55.399720   26654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42357
	I0610 10:44:55.400286   26654 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:55.400792   26654 main.go:141] libmachine: Using API Version  1
	I0610 10:44:55.400814   26654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:55.401178   26654 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:55.401376   26654 main.go:141] libmachine: (ha-565925-m02) Calling .GetIP
	I0610 10:44:55.404390   26654 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:44:55.404749   26654 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:44:55.404773   26654 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:44:55.404916   26654 host.go:66] Checking if "ha-565925-m02" exists ...
	I0610 10:44:55.405248   26654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:55.405290   26654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:55.420015   26654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41217
	I0610 10:44:55.420438   26654 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:55.421012   26654 main.go:141] libmachine: Using API Version  1
	I0610 10:44:55.421036   26654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:55.421360   26654 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:55.421530   26654 main.go:141] libmachine: (ha-565925-m02) Calling .DriverName
	I0610 10:44:55.421720   26654 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:44:55.421743   26654 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:44:55.424305   26654 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:44:55.424704   26654 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:44:55.424728   26654 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:44:55.425148   26654 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:44:55.425292   26654 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:44:55.425381   26654 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:44:55.425506   26654 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/id_rsa Username:docker}
	W0610 10:44:57.953241   26654 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.230:22: connect: no route to host
	W0610 10:44:57.953322   26654 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host
	E0610 10:44:57.953337   26654 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host
	I0610 10:44:57.953344   26654 status.go:257] ha-565925-m02 status: &{Name:ha-565925-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0610 10:44:57.953361   26654 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host
	I0610 10:44:57.953369   26654 status.go:255] checking status of ha-565925-m03 ...
	I0610 10:44:57.953659   26654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:57.953687   26654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:57.968565   26654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45099
	I0610 10:44:57.968965   26654 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:57.969370   26654 main.go:141] libmachine: Using API Version  1
	I0610 10:44:57.969390   26654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:57.969790   26654 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:57.969987   26654 main.go:141] libmachine: (ha-565925-m03) Calling .GetState
	I0610 10:44:57.971650   26654 status.go:330] ha-565925-m03 host status = "Running" (err=<nil>)
	I0610 10:44:57.971663   26654 host.go:66] Checking if "ha-565925-m03" exists ...
	I0610 10:44:57.971934   26654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:57.971961   26654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:57.987203   26654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34179
	I0610 10:44:57.987671   26654 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:57.988127   26654 main.go:141] libmachine: Using API Version  1
	I0610 10:44:57.988143   26654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:57.988511   26654 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:57.988728   26654 main.go:141] libmachine: (ha-565925-m03) Calling .GetIP
	I0610 10:44:57.991482   26654 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:44:57.991903   26654 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:44:57.991935   26654 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:44:57.992119   26654 host.go:66] Checking if "ha-565925-m03" exists ...
	I0610 10:44:57.992388   26654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:57.992421   26654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:58.007141   26654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34887
	I0610 10:44:58.007660   26654 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:58.008163   26654 main.go:141] libmachine: Using API Version  1
	I0610 10:44:58.008179   26654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:58.008511   26654 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:58.008740   26654 main.go:141] libmachine: (ha-565925-m03) Calling .DriverName
	I0610 10:44:58.008918   26654 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:44:58.008935   26654 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:44:58.011786   26654 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:44:58.012223   26654 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:44:58.012242   26654 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:44:58.012375   26654 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:44:58.012541   26654 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:44:58.012708   26654 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:44:58.012880   26654 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa Username:docker}
	I0610 10:44:58.087970   26654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:44:58.102956   26654 kubeconfig.go:125] found "ha-565925" server: "https://192.168.39.254:8443"
	I0610 10:44:58.102990   26654 api_server.go:166] Checking apiserver status ...
	I0610 10:44:58.103035   26654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:44:58.117561   26654 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1508/cgroup
	W0610 10:44:58.126556   26654 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1508/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 10:44:58.126613   26654 ssh_runner.go:195] Run: ls
	I0610 10:44:58.130588   26654 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0610 10:44:58.136144   26654 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0610 10:44:58.136170   26654 status.go:422] ha-565925-m03 apiserver status = Running (err=<nil>)
	I0610 10:44:58.136181   26654 status.go:257] ha-565925-m03 status: &{Name:ha-565925-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:44:58.136195   26654 status.go:255] checking status of ha-565925-m04 ...
	I0610 10:44:58.136470   26654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:58.136493   26654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:58.151358   26654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37503
	I0610 10:44:58.151777   26654 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:58.152311   26654 main.go:141] libmachine: Using API Version  1
	I0610 10:44:58.152331   26654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:58.152630   26654 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:58.152783   26654 main.go:141] libmachine: (ha-565925-m04) Calling .GetState
	I0610 10:44:58.154353   26654 status.go:330] ha-565925-m04 host status = "Running" (err=<nil>)
	I0610 10:44:58.154370   26654 host.go:66] Checking if "ha-565925-m04" exists ...
	I0610 10:44:58.154681   26654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:58.154708   26654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:58.169267   26654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34959
	I0610 10:44:58.169703   26654 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:58.170145   26654 main.go:141] libmachine: Using API Version  1
	I0610 10:44:58.170163   26654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:58.170445   26654 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:58.170585   26654 main.go:141] libmachine: (ha-565925-m04) Calling .GetIP
	I0610 10:44:58.173318   26654 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:44:58.173760   26654 main.go:141] libmachine: (ha-565925-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:20:94", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:41:43 +0000 UTC Type:0 Mac:52:54:00:c5:20:94 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-565925-m04 Clientid:01:52:54:00:c5:20:94}
	I0610 10:44:58.173794   26654 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined IP address 192.168.39.229 and MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:44:58.173982   26654 host.go:66] Checking if "ha-565925-m04" exists ...
	I0610 10:44:58.174364   26654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:58.174408   26654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:58.189326   26654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45103
	I0610 10:44:58.189709   26654 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:58.190204   26654 main.go:141] libmachine: Using API Version  1
	I0610 10:44:58.190229   26654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:58.190552   26654 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:58.190726   26654 main.go:141] libmachine: (ha-565925-m04) Calling .DriverName
	I0610 10:44:58.190906   26654 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:44:58.190945   26654 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHHostname
	I0610 10:44:58.193612   26654 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:44:58.194076   26654 main.go:141] libmachine: (ha-565925-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:20:94", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:41:43 +0000 UTC Type:0 Mac:52:54:00:c5:20:94 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-565925-m04 Clientid:01:52:54:00:c5:20:94}
	I0610 10:44:58.194107   26654 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined IP address 192.168.39.229 and MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:44:58.194264   26654 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHPort
	I0610 10:44:58.194417   26654 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHKeyPath
	I0610 10:44:58.194565   26654 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHUsername
	I0610 10:44:58.194691   26654 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m04/id_rsa Username:docker}
	I0610 10:44:58.276202   26654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:44:58.289900   26654 status.go:257] ha-565925-m04 status: &{Name:ha-565925-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr: exit status 3 (5.459072895s)

-- stdout --
	ha-565925
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565925-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-565925-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565925-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0610 10:44:59.031479   26753 out.go:291] Setting OutFile to fd 1 ...
	I0610 10:44:59.031603   26753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:44:59.031614   26753 out.go:304] Setting ErrFile to fd 2...
	I0610 10:44:59.031618   26753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:44:59.031808   26753 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 10:44:59.032020   26753 out.go:298] Setting JSON to false
	I0610 10:44:59.032045   26753 mustload.go:65] Loading cluster: ha-565925
	I0610 10:44:59.032193   26753 notify.go:220] Checking for updates...
	I0610 10:44:59.032522   26753 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:44:59.032536   26753 status.go:255] checking status of ha-565925 ...
	I0610 10:44:59.032934   26753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:59.033023   26753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:59.052238   26753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46193
	I0610 10:44:59.052700   26753 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:59.053318   26753 main.go:141] libmachine: Using API Version  1
	I0610 10:44:59.053338   26753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:59.053704   26753 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:59.053902   26753 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:44:59.055477   26753 status.go:330] ha-565925 host status = "Running" (err=<nil>)
	I0610 10:44:59.055492   26753 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:44:59.055824   26753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:59.055859   26753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:59.070313   26753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44783
	I0610 10:44:59.070827   26753 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:59.071323   26753 main.go:141] libmachine: Using API Version  1
	I0610 10:44:59.071345   26753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:59.071629   26753 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:59.071812   26753 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:44:59.074588   26753 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:44:59.075072   26753 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:44:59.075099   26753 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:44:59.075284   26753 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:44:59.075594   26753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:59.075636   26753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:59.090775   26753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42185
	I0610 10:44:59.091149   26753 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:59.091570   26753 main.go:141] libmachine: Using API Version  1
	I0610 10:44:59.091592   26753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:59.091906   26753 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:59.092092   26753 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:44:59.092266   26753 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:44:59.092287   26753 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:44:59.095109   26753 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:44:59.095535   26753 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:44:59.095562   26753 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:44:59.095747   26753 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:44:59.095919   26753 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:44:59.096100   26753 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:44:59.096258   26753 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:44:59.180884   26753 ssh_runner.go:195] Run: systemctl --version
	I0610 10:44:59.187041   26753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:44:59.202842   26753 kubeconfig.go:125] found "ha-565925" server: "https://192.168.39.254:8443"
	I0610 10:44:59.202866   26753 api_server.go:166] Checking apiserver status ...
	I0610 10:44:59.202895   26753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:44:59.219336   26753 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	W0610 10:44:59.229937   26753 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 10:44:59.229996   26753 ssh_runner.go:195] Run: ls
	I0610 10:44:59.236433   26753 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0610 10:44:59.247101   26753 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0610 10:44:59.247130   26753 status.go:422] ha-565925 apiserver status = Running (err=<nil>)
	I0610 10:44:59.247142   26753 status.go:257] ha-565925 status: &{Name:ha-565925 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:44:59.247160   26753 status.go:255] checking status of ha-565925-m02 ...
	I0610 10:44:59.247442   26753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:59.247486   26753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:59.262162   26753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44087
	I0610 10:44:59.262659   26753 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:59.263126   26753 main.go:141] libmachine: Using API Version  1
	I0610 10:44:59.263146   26753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:59.263407   26753 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:59.263601   26753 main.go:141] libmachine: (ha-565925-m02) Calling .GetState
	I0610 10:44:59.265131   26753 status.go:330] ha-565925-m02 host status = "Running" (err=<nil>)
	I0610 10:44:59.265150   26753 host.go:66] Checking if "ha-565925-m02" exists ...
	I0610 10:44:59.265416   26753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:59.265450   26753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:59.282317   26753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I0610 10:44:59.282816   26753 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:59.283421   26753 main.go:141] libmachine: Using API Version  1
	I0610 10:44:59.283479   26753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:59.283925   26753 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:59.284099   26753 main.go:141] libmachine: (ha-565925-m02) Calling .GetIP
	I0610 10:44:59.287589   26753 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:44:59.288210   26753 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:44:59.288288   26753 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:44:59.288470   26753 host.go:66] Checking if "ha-565925-m02" exists ...
	I0610 10:44:59.288864   26753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:44:59.288900   26753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:44:59.303729   26753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33437
	I0610 10:44:59.304217   26753 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:44:59.304725   26753 main.go:141] libmachine: Using API Version  1
	I0610 10:44:59.304741   26753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:44:59.305089   26753 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:44:59.305303   26753 main.go:141] libmachine: (ha-565925-m02) Calling .DriverName
	I0610 10:44:59.305495   26753 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:44:59.305520   26753 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:44:59.308563   26753 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:44:59.309136   26753 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:44:59.309156   26753 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:44:59.309201   26753 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:44:59.309393   26753 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:44:59.309551   26753 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:44:59.309677   26753 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/id_rsa Username:docker}
	W0610 10:45:01.025209   26753 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.230:22: connect: no route to host
	I0610 10:45:01.025270   26753 retry.go:31] will retry after 163.873143ms: dial tcp 192.168.39.230:22: connect: no route to host
	W0610 10:45:04.097201   26753 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.230:22: connect: no route to host
	W0610 10:45:04.097322   26753 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host
	E0610 10:45:04.097346   26753 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host
	I0610 10:45:04.097353   26753 status.go:257] ha-565925-m02 status: &{Name:ha-565925-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0610 10:45:04.097371   26753 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host
	I0610 10:45:04.097379   26753 status.go:255] checking status of ha-565925-m03 ...
	I0610 10:45:04.097667   26753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:04.097702   26753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:04.113346   26753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32991
	I0610 10:45:04.113838   26753 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:04.114371   26753 main.go:141] libmachine: Using API Version  1
	I0610 10:45:04.114397   26753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:04.114689   26753 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:04.114859   26753 main.go:141] libmachine: (ha-565925-m03) Calling .GetState
	I0610 10:45:04.116340   26753 status.go:330] ha-565925-m03 host status = "Running" (err=<nil>)
	I0610 10:45:04.116357   26753 host.go:66] Checking if "ha-565925-m03" exists ...
	I0610 10:45:04.116633   26753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:04.116671   26753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:04.131856   26753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44837
	I0610 10:45:04.132242   26753 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:04.132672   26753 main.go:141] libmachine: Using API Version  1
	I0610 10:45:04.132695   26753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:04.132978   26753 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:04.133184   26753 main.go:141] libmachine: (ha-565925-m03) Calling .GetIP
	I0610 10:45:04.135805   26753 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:04.136195   26753 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:45:04.136225   26753 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:04.136383   26753 host.go:66] Checking if "ha-565925-m03" exists ...
	I0610 10:45:04.136663   26753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:04.136699   26753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:04.151335   26753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41745
	I0610 10:45:04.151777   26753 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:04.152166   26753 main.go:141] libmachine: Using API Version  1
	I0610 10:45:04.152188   26753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:04.152458   26753 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:04.152612   26753 main.go:141] libmachine: (ha-565925-m03) Calling .DriverName
	I0610 10:45:04.152813   26753 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:45:04.152841   26753 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:45:04.155335   26753 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:04.155731   26753 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:45:04.155757   26753 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:04.155927   26753 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:45:04.156086   26753 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:45:04.156247   26753 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:45:04.156392   26753 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa Username:docker}
	I0610 10:45:04.231693   26753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:45:04.247128   26753 kubeconfig.go:125] found "ha-565925" server: "https://192.168.39.254:8443"
	I0610 10:45:04.247154   26753 api_server.go:166] Checking apiserver status ...
	I0610 10:45:04.247193   26753 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:45:04.261201   26753 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1508/cgroup
	W0610 10:45:04.278660   26753 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1508/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 10:45:04.278716   26753 ssh_runner.go:195] Run: ls
	I0610 10:45:04.283459   26753 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0610 10:45:04.289585   26753 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0610 10:45:04.289605   26753 status.go:422] ha-565925-m03 apiserver status = Running (err=<nil>)
	I0610 10:45:04.289613   26753 status.go:257] ha-565925-m03 status: &{Name:ha-565925-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:45:04.289627   26753 status.go:255] checking status of ha-565925-m04 ...
	I0610 10:45:04.289997   26753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:04.290042   26753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:04.306373   26753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46031
	I0610 10:45:04.306719   26753 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:04.307279   26753 main.go:141] libmachine: Using API Version  1
	I0610 10:45:04.307306   26753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:04.307634   26753 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:04.307860   26753 main.go:141] libmachine: (ha-565925-m04) Calling .GetState
	I0610 10:45:04.309487   26753 status.go:330] ha-565925-m04 host status = "Running" (err=<nil>)
	I0610 10:45:04.309502   26753 host.go:66] Checking if "ha-565925-m04" exists ...
	I0610 10:45:04.309761   26753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:04.309792   26753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:04.325133   26753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33893
	I0610 10:45:04.325519   26753 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:04.326042   26753 main.go:141] libmachine: Using API Version  1
	I0610 10:45:04.326062   26753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:04.326361   26753 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:04.326530   26753 main.go:141] libmachine: (ha-565925-m04) Calling .GetIP
	I0610 10:45:04.329436   26753 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:04.329824   26753 main.go:141] libmachine: (ha-565925-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:20:94", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:41:43 +0000 UTC Type:0 Mac:52:54:00:c5:20:94 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-565925-m04 Clientid:01:52:54:00:c5:20:94}
	I0610 10:45:04.329849   26753 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined IP address 192.168.39.229 and MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:04.330014   26753 host.go:66] Checking if "ha-565925-m04" exists ...
	I0610 10:45:04.330313   26753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:04.330362   26753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:04.344826   26753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45741
	I0610 10:45:04.345313   26753 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:04.345752   26753 main.go:141] libmachine: Using API Version  1
	I0610 10:45:04.345772   26753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:04.346144   26753 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:04.346350   26753 main.go:141] libmachine: (ha-565925-m04) Calling .DriverName
	I0610 10:45:04.346541   26753 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:45:04.346558   26753 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHHostname
	I0610 10:45:04.349867   26753 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:04.350329   26753 main.go:141] libmachine: (ha-565925-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:20:94", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:41:43 +0000 UTC Type:0 Mac:52:54:00:c5:20:94 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-565925-m04 Clientid:01:52:54:00:c5:20:94}
	I0610 10:45:04.350364   26753 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined IP address 192.168.39.229 and MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:04.350522   26753 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHPort
	I0610 10:45:04.350713   26753 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHKeyPath
	I0610 10:45:04.350867   26753 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHUsername
	I0610 10:45:04.351030   26753 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m04/id_rsa Username:docker}
	I0610 10:45:04.431946   26753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:45:04.446508   26753 status.go:257] ha-565925-m04 status: &{Name:ha-565925-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
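The status run above fails only for ha-565925-m02: every SSH dial to 192.168.39.230:22 comes back with "connect: no route to host", so the storage check on /var can never run and the node is reported as Host:Error / Kubelet:Nonexistent. Below is a minimal stand-alone Go sketch (not part of the test suite) for probing that port directly from the CI host; the address is copied from the log and the 3-second timeout is an assumption chosen for quick manual diagnosis.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the SSH port the status check could not reach; 192.168.39.230:22
	// is taken from the log above. A short timeout keeps the probe snappy.
	conn, err := net.DialTimeout("tcp", "192.168.39.230:22", 3*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err) // e.g. "connect: no route to host"
		return
	}
	defer conn.Close()
	fmt.Println("port reachable from", conn.LocalAddr())
}
```

If this probe also reports "no route to host", the failure is at the network/VM level (the m02 guest is down or its lease is gone), not in the SSH or status code paths.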
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr: exit status 3 (4.797684485s)

                                                
                                                
-- stdout --
	ha-565925
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565925-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-565925-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565925-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:45:05.826695   26871 out.go:291] Setting OutFile to fd 1 ...
	I0610 10:45:05.826818   26871 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:45:05.826827   26871 out.go:304] Setting ErrFile to fd 2...
	I0610 10:45:05.826832   26871 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:45:05.827050   26871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 10:45:05.827229   26871 out.go:298] Setting JSON to false
	I0610 10:45:05.827254   26871 mustload.go:65] Loading cluster: ha-565925
	I0610 10:45:05.827608   26871 notify.go:220] Checking for updates...
	I0610 10:45:05.828617   26871 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:45:05.828755   26871 status.go:255] checking status of ha-565925 ...
	I0610 10:45:05.829526   26871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:05.829583   26871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:05.844174   26871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41441
	I0610 10:45:05.844635   26871 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:05.845227   26871 main.go:141] libmachine: Using API Version  1
	I0610 10:45:05.845249   26871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:05.845602   26871 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:05.845772   26871 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:45:05.847240   26871 status.go:330] ha-565925 host status = "Running" (err=<nil>)
	I0610 10:45:05.847256   26871 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:45:05.847567   26871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:05.847612   26871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:05.863561   26871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37505
	I0610 10:45:05.863994   26871 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:05.864529   26871 main.go:141] libmachine: Using API Version  1
	I0610 10:45:05.864553   26871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:05.864854   26871 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:05.865054   26871 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:45:05.868520   26871 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:45:05.869126   26871 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:45:05.869165   26871 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:45:05.869324   26871 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:45:05.869660   26871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:05.869705   26871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:05.884408   26871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43513
	I0610 10:45:05.884810   26871 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:05.885304   26871 main.go:141] libmachine: Using API Version  1
	I0610 10:45:05.885333   26871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:05.885664   26871 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:05.885833   26871 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:45:05.886042   26871 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:45:05.886072   26871 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:45:05.888721   26871 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:45:05.889139   26871 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:45:05.889181   26871 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:45:05.889403   26871 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:45:05.889602   26871 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:45:05.889755   26871 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:45:05.889892   26871 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:45:05.978259   26871 ssh_runner.go:195] Run: systemctl --version
	I0610 10:45:05.986413   26871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:45:06.006963   26871 kubeconfig.go:125] found "ha-565925" server: "https://192.168.39.254:8443"
	I0610 10:45:06.006989   26871 api_server.go:166] Checking apiserver status ...
	I0610 10:45:06.007019   26871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:45:06.023646   26871 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	W0610 10:45:06.034781   26871 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 10:45:06.034827   26871 ssh_runner.go:195] Run: ls
	I0610 10:45:06.039466   26871 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0610 10:45:06.043498   26871 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0610 10:45:06.043526   26871 status.go:422] ha-565925 apiserver status = Running (err=<nil>)
	I0610 10:45:06.043550   26871 status.go:257] ha-565925 status: &{Name:ha-565925 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:45:06.043569   26871 status.go:255] checking status of ha-565925-m02 ...
	I0610 10:45:06.043835   26871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:06.043872   26871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:06.061175   26871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41475
	I0610 10:45:06.061612   26871 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:06.062001   26871 main.go:141] libmachine: Using API Version  1
	I0610 10:45:06.062026   26871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:06.062353   26871 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:06.062539   26871 main.go:141] libmachine: (ha-565925-m02) Calling .GetState
	I0610 10:45:06.064200   26871 status.go:330] ha-565925-m02 host status = "Running" (err=<nil>)
	I0610 10:45:06.064219   26871 host.go:66] Checking if "ha-565925-m02" exists ...
	I0610 10:45:06.064617   26871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:06.064667   26871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:06.080076   26871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39651
	I0610 10:45:06.080473   26871 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:06.080932   26871 main.go:141] libmachine: Using API Version  1
	I0610 10:45:06.080975   26871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:06.081354   26871 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:06.081547   26871 main.go:141] libmachine: (ha-565925-m02) Calling .GetIP
	I0610 10:45:06.084499   26871 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:45:06.085040   26871 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:45:06.085087   26871 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:45:06.085281   26871 host.go:66] Checking if "ha-565925-m02" exists ...
	I0610 10:45:06.085612   26871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:06.085655   26871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:06.100996   26871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44653
	I0610 10:45:06.101504   26871 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:06.101959   26871 main.go:141] libmachine: Using API Version  1
	I0610 10:45:06.101983   26871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:06.102276   26871 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:06.102443   26871 main.go:141] libmachine: (ha-565925-m02) Calling .DriverName
	I0610 10:45:06.102619   26871 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:45:06.102640   26871 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:45:06.105412   26871 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:45:06.105831   26871 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:45:06.105855   26871 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:45:06.106032   26871 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:45:06.106177   26871 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:45:06.106314   26871 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:45:06.106423   26871 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/id_rsa Username:docker}
	W0610 10:45:07.173212   26871 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.230:22: connect: no route to host
	I0610 10:45:07.173269   26871 retry.go:31] will retry after 162.074998ms: dial tcp 192.168.39.230:22: connect: no route to host
	W0610 10:45:10.241297   26871 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.230:22: connect: no route to host
	W0610 10:45:10.241386   26871 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host
	E0610 10:45:10.241404   26871 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host
	I0610 10:45:10.241410   26871 status.go:257] ha-565925-m02 status: &{Name:ha-565925-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0610 10:45:10.241439   26871 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host
	I0610 10:45:10.241449   26871 status.go:255] checking status of ha-565925-m03 ...
	I0610 10:45:10.241743   26871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:10.241788   26871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:10.257627   26871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36655
	I0610 10:45:10.258057   26871 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:10.258555   26871 main.go:141] libmachine: Using API Version  1
	I0610 10:45:10.258577   26871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:10.258938   26871 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:10.259178   26871 main.go:141] libmachine: (ha-565925-m03) Calling .GetState
	I0610 10:45:10.260944   26871 status.go:330] ha-565925-m03 host status = "Running" (err=<nil>)
	I0610 10:45:10.260981   26871 host.go:66] Checking if "ha-565925-m03" exists ...
	I0610 10:45:10.261255   26871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:10.261290   26871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:10.276211   26871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33647
	I0610 10:45:10.276609   26871 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:10.277060   26871 main.go:141] libmachine: Using API Version  1
	I0610 10:45:10.277084   26871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:10.277424   26871 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:10.277638   26871 main.go:141] libmachine: (ha-565925-m03) Calling .GetIP
	I0610 10:45:10.280655   26871 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:10.281166   26871 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:45:10.281195   26871 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:10.281377   26871 host.go:66] Checking if "ha-565925-m03" exists ...
	I0610 10:45:10.281698   26871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:10.281736   26871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:10.296275   26871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46157
	I0610 10:45:10.296712   26871 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:10.297149   26871 main.go:141] libmachine: Using API Version  1
	I0610 10:45:10.297170   26871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:10.297478   26871 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:10.297660   26871 main.go:141] libmachine: (ha-565925-m03) Calling .DriverName
	I0610 10:45:10.297853   26871 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:45:10.297877   26871 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:45:10.300712   26871 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:10.301160   26871 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:45:10.301194   26871 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:10.301492   26871 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:45:10.301670   26871 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:45:10.301811   26871 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:45:10.301964   26871 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa Username:docker}
	I0610 10:45:10.376065   26871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:45:10.391510   26871 kubeconfig.go:125] found "ha-565925" server: "https://192.168.39.254:8443"
	I0610 10:45:10.391534   26871 api_server.go:166] Checking apiserver status ...
	I0610 10:45:10.391567   26871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:45:10.411550   26871 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1508/cgroup
	W0610 10:45:10.421170   26871 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1508/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 10:45:10.421233   26871 ssh_runner.go:195] Run: ls
	I0610 10:45:10.425921   26871 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0610 10:45:10.430004   26871 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0610 10:45:10.430025   26871 status.go:422] ha-565925-m03 apiserver status = Running (err=<nil>)
	I0610 10:45:10.430033   26871 status.go:257] ha-565925-m03 status: &{Name:ha-565925-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:45:10.430048   26871 status.go:255] checking status of ha-565925-m04 ...
	I0610 10:45:10.430326   26871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:10.430356   26871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:10.445040   26871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35931
	I0610 10:45:10.445424   26871 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:10.445906   26871 main.go:141] libmachine: Using API Version  1
	I0610 10:45:10.445931   26871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:10.446245   26871 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:10.446463   26871 main.go:141] libmachine: (ha-565925-m04) Calling .GetState
	I0610 10:45:10.448325   26871 status.go:330] ha-565925-m04 host status = "Running" (err=<nil>)
	I0610 10:45:10.448341   26871 host.go:66] Checking if "ha-565925-m04" exists ...
	I0610 10:45:10.448605   26871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:10.448639   26871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:10.464479   26871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35943
	I0610 10:45:10.464874   26871 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:10.465305   26871 main.go:141] libmachine: Using API Version  1
	I0610 10:45:10.465327   26871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:10.465652   26871 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:10.465811   26871 main.go:141] libmachine: (ha-565925-m04) Calling .GetIP
	I0610 10:45:10.468634   26871 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:10.469068   26871 main.go:141] libmachine: (ha-565925-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:20:94", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:41:43 +0000 UTC Type:0 Mac:52:54:00:c5:20:94 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-565925-m04 Clientid:01:52:54:00:c5:20:94}
	I0610 10:45:10.469102   26871 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined IP address 192.168.39.229 and MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:10.469245   26871 host.go:66] Checking if "ha-565925-m04" exists ...
	I0610 10:45:10.469597   26871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:10.469635   26871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:10.484287   26871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41767
	I0610 10:45:10.484702   26871 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:10.485245   26871 main.go:141] libmachine: Using API Version  1
	I0610 10:45:10.485270   26871 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:10.485613   26871 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:10.485822   26871 main.go:141] libmachine: (ha-565925-m04) Calling .DriverName
	I0610 10:45:10.486023   26871 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:45:10.486043   26871 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHHostname
	I0610 10:45:10.488521   26871 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:10.488911   26871 main.go:141] libmachine: (ha-565925-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:20:94", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:41:43 +0000 UTC Type:0 Mac:52:54:00:c5:20:94 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-565925-m04 Clientid:01:52:54:00:c5:20:94}
	I0610 10:45:10.488940   26871 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined IP address 192.168.39.229 and MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:10.489083   26871 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHPort
	I0610 10:45:10.489255   26871 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHKeyPath
	I0610 10:45:10.489440   26871 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHUsername
	I0610 10:45:10.489571   26871 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m04/id_rsa Username:docker}
	I0610 10:45:10.568098   26871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:45:10.581812   26871 status.go:257] ha-565925-m04 status: &{Name:ha-565925-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
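For the nodes that do respond, the log shows the same health sequence each time: `systemctl is-active kubelet`, a `pgrep` for kube-apiserver, and finally a GET against https://192.168.39.254:8443/healthz that must return 200 with body "ok". The sketch below is a hedged illustration of that last probe only; the real status check authenticates with the kubeconfig's client certificate, whereas this version skips TLS verification for brevity and may therefore receive 401/403 rather than 200 on a cluster that rejects anonymous requests.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Illustration only: skip certificate verification instead of loading the
	// kubeconfig's client certificate, which the real status check uses.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}
```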
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr: exit status 3 (3.751713924s)

                                                
                                                
-- stdout --
	ha-565925
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565925-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-565925-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565925-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:45:13.368466   26972 out.go:291] Setting OutFile to fd 1 ...
	I0610 10:45:13.368728   26972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:45:13.368739   26972 out.go:304] Setting ErrFile to fd 2...
	I0610 10:45:13.368743   26972 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:45:13.368907   26972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 10:45:13.369128   26972 out.go:298] Setting JSON to false
	I0610 10:45:13.369153   26972 mustload.go:65] Loading cluster: ha-565925
	I0610 10:45:13.369265   26972 notify.go:220] Checking for updates...
	I0610 10:45:13.369546   26972 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:45:13.369560   26972 status.go:255] checking status of ha-565925 ...
	I0610 10:45:13.370064   26972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:13.370133   26972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:13.388620   26972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35471
	I0610 10:45:13.389146   26972 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:13.389789   26972 main.go:141] libmachine: Using API Version  1
	I0610 10:45:13.389808   26972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:13.390247   26972 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:13.390490   26972 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:45:13.393798   26972 status.go:330] ha-565925 host status = "Running" (err=<nil>)
	I0610 10:45:13.393817   26972 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:45:13.394142   26972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:13.394182   26972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:13.411161   26972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39247
	I0610 10:45:13.411697   26972 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:13.412343   26972 main.go:141] libmachine: Using API Version  1
	I0610 10:45:13.412361   26972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:13.412906   26972 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:13.413108   26972 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:45:13.416583   26972 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:45:13.417064   26972 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:45:13.417095   26972 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:45:13.417248   26972 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:45:13.417520   26972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:13.417557   26972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:13.436416   26972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
	I0610 10:45:13.436894   26972 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:13.437407   26972 main.go:141] libmachine: Using API Version  1
	I0610 10:45:13.437429   26972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:13.437816   26972 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:13.438024   26972 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:45:13.438206   26972 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:45:13.438225   26972 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:45:13.441136   26972 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:45:13.441515   26972 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:45:13.441547   26972 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:45:13.441675   26972 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:45:13.441847   26972 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:45:13.442005   26972 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:45:13.442100   26972 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:45:13.545210   26972 ssh_runner.go:195] Run: systemctl --version
	I0610 10:45:13.551325   26972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:45:13.567115   26972 kubeconfig.go:125] found "ha-565925" server: "https://192.168.39.254:8443"
	I0610 10:45:13.567141   26972 api_server.go:166] Checking apiserver status ...
	I0610 10:45:13.567172   26972 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:45:13.582051   26972 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	W0610 10:45:13.591990   26972 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 10:45:13.592051   26972 ssh_runner.go:195] Run: ls
	I0610 10:45:13.597099   26972 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0610 10:45:13.601338   26972 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0610 10:45:13.601363   26972 status.go:422] ha-565925 apiserver status = Running (err=<nil>)
	I0610 10:45:13.601375   26972 status.go:257] ha-565925 status: &{Name:ha-565925 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:45:13.601401   26972 status.go:255] checking status of ha-565925-m02 ...
	I0610 10:45:13.601686   26972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:13.601728   26972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:13.617104   26972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39039
	I0610 10:45:13.617555   26972 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:13.618008   26972 main.go:141] libmachine: Using API Version  1
	I0610 10:45:13.618028   26972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:13.618367   26972 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:13.618536   26972 main.go:141] libmachine: (ha-565925-m02) Calling .GetState
	I0610 10:45:13.620200   26972 status.go:330] ha-565925-m02 host status = "Running" (err=<nil>)
	I0610 10:45:13.620217   26972 host.go:66] Checking if "ha-565925-m02" exists ...
	I0610 10:45:13.620519   26972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:13.620567   26972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:13.636698   26972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42661
	I0610 10:45:13.637103   26972 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:13.637550   26972 main.go:141] libmachine: Using API Version  1
	I0610 10:45:13.637575   26972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:13.637898   26972 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:13.638116   26972 main.go:141] libmachine: (ha-565925-m02) Calling .GetIP
	I0610 10:45:13.640857   26972 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:45:13.641258   26972 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:45:13.641288   26972 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:45:13.641412   26972 host.go:66] Checking if "ha-565925-m02" exists ...
	I0610 10:45:13.641680   26972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:13.641714   26972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:13.657375   26972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33519
	I0610 10:45:13.657827   26972 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:13.658294   26972 main.go:141] libmachine: Using API Version  1
	I0610 10:45:13.658320   26972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:13.658609   26972 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:13.658781   26972 main.go:141] libmachine: (ha-565925-m02) Calling .DriverName
	I0610 10:45:13.658963   26972 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:45:13.658987   26972 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:45:13.661850   26972 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:45:13.662329   26972 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:45:13.662358   26972 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:45:13.662525   26972 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:45:13.662682   26972 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:45:13.662870   26972 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:45:13.663018   26972 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/id_rsa Username:docker}
	W0610 10:45:16.737171   26972 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.230:22: connect: no route to host
	W0610 10:45:16.737258   26972 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host
	E0610 10:45:16.737271   26972 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host
	I0610 10:45:16.737277   26972 status.go:257] ha-565925-m02 status: &{Name:ha-565925-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0610 10:45:16.737294   26972 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host
	I0610 10:45:16.737301   26972 status.go:255] checking status of ha-565925-m03 ...
	I0610 10:45:16.737613   26972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:16.737651   26972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:16.752165   26972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42727
	I0610 10:45:16.752559   26972 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:16.753044   26972 main.go:141] libmachine: Using API Version  1
	I0610 10:45:16.753071   26972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:16.753377   26972 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:16.753585   26972 main.go:141] libmachine: (ha-565925-m03) Calling .GetState
	I0610 10:45:16.755254   26972 status.go:330] ha-565925-m03 host status = "Running" (err=<nil>)
	I0610 10:45:16.755272   26972 host.go:66] Checking if "ha-565925-m03" exists ...
	I0610 10:45:16.755599   26972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:16.755644   26972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:16.770100   26972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36689
	I0610 10:45:16.770598   26972 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:16.771102   26972 main.go:141] libmachine: Using API Version  1
	I0610 10:45:16.771123   26972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:16.771442   26972 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:16.771620   26972 main.go:141] libmachine: (ha-565925-m03) Calling .GetIP
	I0610 10:45:16.773981   26972 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:16.774428   26972 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:45:16.774453   26972 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:16.774562   26972 host.go:66] Checking if "ha-565925-m03" exists ...
	I0610 10:45:16.774841   26972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:16.774873   26972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:16.789414   26972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35429
	I0610 10:45:16.789792   26972 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:16.790240   26972 main.go:141] libmachine: Using API Version  1
	I0610 10:45:16.790264   26972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:16.790593   26972 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:16.790793   26972 main.go:141] libmachine: (ha-565925-m03) Calling .DriverName
	I0610 10:45:16.790954   26972 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:45:16.790971   26972 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:45:16.793629   26972 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:16.794067   26972 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:45:16.794101   26972 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:16.794240   26972 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:45:16.794425   26972 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:45:16.794576   26972 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:45:16.794689   26972 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa Username:docker}
	I0610 10:45:16.872779   26972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:45:16.887847   26972 kubeconfig.go:125] found "ha-565925" server: "https://192.168.39.254:8443"
	I0610 10:45:16.887874   26972 api_server.go:166] Checking apiserver status ...
	I0610 10:45:16.887916   26972 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:45:16.901972   26972 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1508/cgroup
	W0610 10:45:16.911349   26972 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1508/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 10:45:16.911398   26972 ssh_runner.go:195] Run: ls
	I0610 10:45:16.915445   26972 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0610 10:45:16.919811   26972 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0610 10:45:16.919833   26972 status.go:422] ha-565925-m03 apiserver status = Running (err=<nil>)
	I0610 10:45:16.919841   26972 status.go:257] ha-565925-m03 status: &{Name:ha-565925-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:45:16.919856   26972 status.go:255] checking status of ha-565925-m04 ...
	I0610 10:45:16.920130   26972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:16.920161   26972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:16.935272   26972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35155
	I0610 10:45:16.935746   26972 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:16.936213   26972 main.go:141] libmachine: Using API Version  1
	I0610 10:45:16.936237   26972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:16.936540   26972 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:16.936756   26972 main.go:141] libmachine: (ha-565925-m04) Calling .GetState
	I0610 10:45:16.938489   26972 status.go:330] ha-565925-m04 host status = "Running" (err=<nil>)
	I0610 10:45:16.938504   26972 host.go:66] Checking if "ha-565925-m04" exists ...
	I0610 10:45:16.938775   26972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:16.938812   26972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:16.953854   26972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42189
	I0610 10:45:16.954272   26972 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:16.954701   26972 main.go:141] libmachine: Using API Version  1
	I0610 10:45:16.954724   26972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:16.955036   26972 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:16.955197   26972 main.go:141] libmachine: (ha-565925-m04) Calling .GetIP
	I0610 10:45:16.957785   26972 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:16.958200   26972 main.go:141] libmachine: (ha-565925-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:20:94", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:41:43 +0000 UTC Type:0 Mac:52:54:00:c5:20:94 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-565925-m04 Clientid:01:52:54:00:c5:20:94}
	I0610 10:45:16.958224   26972 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined IP address 192.168.39.229 and MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:16.958402   26972 host.go:66] Checking if "ha-565925-m04" exists ...
	I0610 10:45:16.958710   26972 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:16.958752   26972 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:16.973992   26972 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39525
	I0610 10:45:16.974345   26972 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:16.974822   26972 main.go:141] libmachine: Using API Version  1
	I0610 10:45:16.974844   26972 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:16.975168   26972 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:16.975364   26972 main.go:141] libmachine: (ha-565925-m04) Calling .DriverName
	I0610 10:45:16.975582   26972 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:45:16.975603   26972 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHHostname
	I0610 10:45:16.978452   26972 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:16.978855   26972 main.go:141] libmachine: (ha-565925-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:20:94", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:41:43 +0000 UTC Type:0 Mac:52:54:00:c5:20:94 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-565925-m04 Clientid:01:52:54:00:c5:20:94}
	I0610 10:45:16.978875   26972 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined IP address 192.168.39.229 and MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:16.979029   26972 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHPort
	I0610 10:45:16.979191   26972 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHKeyPath
	I0610 10:45:16.979321   26972 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHUsername
	I0610 10:45:16.979444   26972 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m04/id_rsa Username:docker}
	I0610 10:45:17.060041   26972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:45:17.073381   26972 status.go:257] ha-565925-m04 status: &{Name:ha-565925-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
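The status probe captured above ends by checking the cluster VIP's /healthz endpoint ("Checking apiserver healthz at https://192.168.39.254:8443/healthz ... returned 200: ok"). A minimal Go sketch of that final probe, assuming the VIP and port from this particular run and skipping TLS verification for an ad-hoc check; this is an illustration, not minikube's own status code:

	// probe_healthz.go: ad-hoc apiserver health probe (illustrative sketch only).
	// The address 192.168.39.254:8443 is the VIP observed in this test run.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver presents a cluster-CA cert; skip verification only
			// for this kind of throwaway probe.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}
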
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr: exit status 3 (3.728032124s)

                                                
                                                
-- stdout --
	ha-565925
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565925-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-565925-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565925-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:45:20.969029   27088 out.go:291] Setting OutFile to fd 1 ...
	I0610 10:45:20.969242   27088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:45:20.969250   27088 out.go:304] Setting ErrFile to fd 2...
	I0610 10:45:20.969254   27088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:45:20.969422   27088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 10:45:20.969572   27088 out.go:298] Setting JSON to false
	I0610 10:45:20.969596   27088 mustload.go:65] Loading cluster: ha-565925
	I0610 10:45:20.969648   27088 notify.go:220] Checking for updates...
	I0610 10:45:20.970005   27088 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:45:20.970023   27088 status.go:255] checking status of ha-565925 ...
	I0610 10:45:20.970424   27088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:20.970481   27088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:20.988654   27088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36649
	I0610 10:45:20.989088   27088 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:20.989674   27088 main.go:141] libmachine: Using API Version  1
	I0610 10:45:20.989700   27088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:20.989983   27088 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:20.990189   27088 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:45:20.991762   27088 status.go:330] ha-565925 host status = "Running" (err=<nil>)
	I0610 10:45:20.991779   27088 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:45:20.992057   27088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:20.992105   27088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:21.008645   27088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37043
	I0610 10:45:21.009138   27088 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:21.009606   27088 main.go:141] libmachine: Using API Version  1
	I0610 10:45:21.009631   27088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:21.010038   27088 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:21.010219   27088 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:45:21.013295   27088 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:45:21.013768   27088 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:45:21.013801   27088 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:45:21.014058   27088 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:45:21.014387   27088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:21.014428   27088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:21.028926   27088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42329
	I0610 10:45:21.029351   27088 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:21.029797   27088 main.go:141] libmachine: Using API Version  1
	I0610 10:45:21.029824   27088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:21.030132   27088 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:21.030307   27088 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:45:21.030492   27088 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:45:21.030513   27088 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:45:21.033783   27088 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:45:21.034225   27088 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:45:21.034258   27088 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:45:21.034429   27088 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:45:21.034628   27088 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:45:21.034879   27088 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:45:21.035023   27088 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:45:21.116540   27088 ssh_runner.go:195] Run: systemctl --version
	I0610 10:45:21.122389   27088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:45:21.141508   27088 kubeconfig.go:125] found "ha-565925" server: "https://192.168.39.254:8443"
	I0610 10:45:21.141544   27088 api_server.go:166] Checking apiserver status ...
	I0610 10:45:21.141585   27088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:45:21.159991   27088 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	W0610 10:45:21.173468   27088 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 10:45:21.173515   27088 ssh_runner.go:195] Run: ls
	I0610 10:45:21.179551   27088 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0610 10:45:21.183669   27088 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0610 10:45:21.183692   27088 status.go:422] ha-565925 apiserver status = Running (err=<nil>)
	I0610 10:45:21.183717   27088 status.go:257] ha-565925 status: &{Name:ha-565925 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:45:21.183733   27088 status.go:255] checking status of ha-565925-m02 ...
	I0610 10:45:21.184137   27088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:21.184187   27088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:21.199675   27088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38117
	I0610 10:45:21.200084   27088 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:21.200617   27088 main.go:141] libmachine: Using API Version  1
	I0610 10:45:21.200637   27088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:21.200972   27088 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:21.201169   27088 main.go:141] libmachine: (ha-565925-m02) Calling .GetState
	I0610 10:45:21.202900   27088 status.go:330] ha-565925-m02 host status = "Running" (err=<nil>)
	I0610 10:45:21.202917   27088 host.go:66] Checking if "ha-565925-m02" exists ...
	I0610 10:45:21.203233   27088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:21.203269   27088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:21.218181   27088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34855
	I0610 10:45:21.218660   27088 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:21.219226   27088 main.go:141] libmachine: Using API Version  1
	I0610 10:45:21.219251   27088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:21.219608   27088 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:21.219804   27088 main.go:141] libmachine: (ha-565925-m02) Calling .GetIP
	I0610 10:45:21.222502   27088 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:45:21.222862   27088 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:45:21.222889   27088 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:45:21.223036   27088 host.go:66] Checking if "ha-565925-m02" exists ...
	I0610 10:45:21.223309   27088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:21.223342   27088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:21.238451   27088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37755
	I0610 10:45:21.238815   27088 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:21.239250   27088 main.go:141] libmachine: Using API Version  1
	I0610 10:45:21.239273   27088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:21.239611   27088 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:21.239821   27088 main.go:141] libmachine: (ha-565925-m02) Calling .DriverName
	I0610 10:45:21.240067   27088 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:45:21.240090   27088 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:45:21.243145   27088 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:45:21.243644   27088 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:45:21.243667   27088 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:45:21.243780   27088 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:45:21.243958   27088 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:45:21.244138   27088 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:45:21.244281   27088 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/id_rsa Username:docker}
	W0610 10:45:24.321210   27088 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.230:22: connect: no route to host
	W0610 10:45:24.321320   27088 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host
	E0610 10:45:24.321349   27088 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host
	I0610 10:45:24.321359   27088 status.go:257] ha-565925-m02 status: &{Name:ha-565925-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0610 10:45:24.321377   27088 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host
	I0610 10:45:24.321385   27088 status.go:255] checking status of ha-565925-m03 ...
	I0610 10:45:24.321675   27088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:24.321713   27088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:24.336573   27088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45835
	I0610 10:45:24.337053   27088 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:24.337522   27088 main.go:141] libmachine: Using API Version  1
	I0610 10:45:24.337545   27088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:24.337834   27088 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:24.337985   27088 main.go:141] libmachine: (ha-565925-m03) Calling .GetState
	I0610 10:45:24.339603   27088 status.go:330] ha-565925-m03 host status = "Running" (err=<nil>)
	I0610 10:45:24.339618   27088 host.go:66] Checking if "ha-565925-m03" exists ...
	I0610 10:45:24.339886   27088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:24.339918   27088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:24.354832   27088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43431
	I0610 10:45:24.355181   27088 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:24.355625   27088 main.go:141] libmachine: Using API Version  1
	I0610 10:45:24.355647   27088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:24.355932   27088 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:24.356117   27088 main.go:141] libmachine: (ha-565925-m03) Calling .GetIP
	I0610 10:45:24.359249   27088 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:24.359686   27088 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:45:24.359723   27088 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:24.360067   27088 host.go:66] Checking if "ha-565925-m03" exists ...
	I0610 10:45:24.360460   27088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:24.360514   27088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:24.374919   27088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33225
	I0610 10:45:24.375285   27088 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:24.375756   27088 main.go:141] libmachine: Using API Version  1
	I0610 10:45:24.375783   27088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:24.376121   27088 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:24.376326   27088 main.go:141] libmachine: (ha-565925-m03) Calling .DriverName
	I0610 10:45:24.376524   27088 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:45:24.376548   27088 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:45:24.379271   27088 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:24.379677   27088 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:45:24.379702   27088 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:24.379852   27088 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:45:24.379994   27088 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:45:24.380180   27088 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:45:24.380344   27088 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa Username:docker}
	I0610 10:45:24.456761   27088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:45:24.470320   27088 kubeconfig.go:125] found "ha-565925" server: "https://192.168.39.254:8443"
	I0610 10:45:24.470350   27088 api_server.go:166] Checking apiserver status ...
	I0610 10:45:24.470381   27088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:45:24.482979   27088 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1508/cgroup
	W0610 10:45:24.492067   27088 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1508/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 10:45:24.492128   27088 ssh_runner.go:195] Run: ls
	I0610 10:45:24.496072   27088 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0610 10:45:24.501974   27088 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0610 10:45:24.501999   27088 status.go:422] ha-565925-m03 apiserver status = Running (err=<nil>)
	I0610 10:45:24.502007   27088 status.go:257] ha-565925-m03 status: &{Name:ha-565925-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:45:24.502021   27088 status.go:255] checking status of ha-565925-m04 ...
	I0610 10:45:24.502345   27088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:24.502388   27088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:24.517062   27088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41863
	I0610 10:45:24.517568   27088 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:24.518093   27088 main.go:141] libmachine: Using API Version  1
	I0610 10:45:24.518109   27088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:24.518476   27088 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:24.518725   27088 main.go:141] libmachine: (ha-565925-m04) Calling .GetState
	I0610 10:45:24.520399   27088 status.go:330] ha-565925-m04 host status = "Running" (err=<nil>)
	I0610 10:45:24.520416   27088 host.go:66] Checking if "ha-565925-m04" exists ...
	I0610 10:45:24.520719   27088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:24.520761   27088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:24.535524   27088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40255
	I0610 10:45:24.535884   27088 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:24.536300   27088 main.go:141] libmachine: Using API Version  1
	I0610 10:45:24.536321   27088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:24.536593   27088 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:24.536736   27088 main.go:141] libmachine: (ha-565925-m04) Calling .GetIP
	I0610 10:45:24.539370   27088 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:24.539783   27088 main.go:141] libmachine: (ha-565925-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:20:94", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:41:43 +0000 UTC Type:0 Mac:52:54:00:c5:20:94 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-565925-m04 Clientid:01:52:54:00:c5:20:94}
	I0610 10:45:24.539815   27088 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined IP address 192.168.39.229 and MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:24.540025   27088 host.go:66] Checking if "ha-565925-m04" exists ...
	I0610 10:45:24.540415   27088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:24.540458   27088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:24.555226   27088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45615
	I0610 10:45:24.555656   27088 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:24.556137   27088 main.go:141] libmachine: Using API Version  1
	I0610 10:45:24.556159   27088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:24.556455   27088 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:24.556607   27088 main.go:141] libmachine: (ha-565925-m04) Calling .DriverName
	I0610 10:45:24.556769   27088 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:45:24.556786   27088 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHHostname
	I0610 10:45:24.559248   27088 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:24.559656   27088 main.go:141] libmachine: (ha-565925-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:20:94", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:41:43 +0000 UTC Type:0 Mac:52:54:00:c5:20:94 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-565925-m04 Clientid:01:52:54:00:c5:20:94}
	I0610 10:45:24.559684   27088 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined IP address 192.168.39.229 and MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:24.559862   27088 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHPort
	I0610 10:45:24.560033   27088 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHKeyPath
	I0610 10:45:24.560189   27088 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHUsername
	I0610 10:45:24.560326   27088 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m04/id_rsa Username:docker}
	I0610 10:45:24.640333   27088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:45:24.654268   27088 status.go:257] ha-565925-m04 status: &{Name:ha-565925-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
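The "host: Error / kubelet: Nonexistent" result reported for ha-565925-m02 above comes from the SSH connection failing with "dial tcp 192.168.39.230:22: connect: no route to host". A minimal sketch of the same reachability check, assuming the node address from this run; illustration only, not the kvm2 driver's implementation:

	// probe_ssh_port.go: TCP reachability check against a node's SSH port
	// (illustrative sketch; 192.168.39.230 is the m02 address in this run).
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.39.230:22", 3*time.Second)
		if err != nil {
			// On an unreachable VM this prints e.g. "connect: no route to host".
			fmt.Println("node unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("port 22 reachable")
	}
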
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr: exit status 7 (610.266658ms)

                                                
                                                
-- stdout --
	ha-565925
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565925-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-565925-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565925-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:45:31.665342   27224 out.go:291] Setting OutFile to fd 1 ...
	I0610 10:45:31.665599   27224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:45:31.665608   27224 out.go:304] Setting ErrFile to fd 2...
	I0610 10:45:31.665615   27224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:45:31.665839   27224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 10:45:31.666017   27224 out.go:298] Setting JSON to false
	I0610 10:45:31.666041   27224 mustload.go:65] Loading cluster: ha-565925
	I0610 10:45:31.666160   27224 notify.go:220] Checking for updates...
	I0610 10:45:31.666525   27224 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:45:31.666543   27224 status.go:255] checking status of ha-565925 ...
	I0610 10:45:31.666971   27224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:31.667054   27224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:31.683102   27224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33869
	I0610 10:45:31.683495   27224 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:31.684034   27224 main.go:141] libmachine: Using API Version  1
	I0610 10:45:31.684055   27224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:31.684365   27224 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:31.684568   27224 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:45:31.686317   27224 status.go:330] ha-565925 host status = "Running" (err=<nil>)
	I0610 10:45:31.686331   27224 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:45:31.686642   27224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:31.686679   27224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:31.701928   27224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35777
	I0610 10:45:31.702407   27224 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:31.702932   27224 main.go:141] libmachine: Using API Version  1
	I0610 10:45:31.702952   27224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:31.703350   27224 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:31.703599   27224 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:45:31.706328   27224 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:45:31.706770   27224 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:45:31.706797   27224 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:45:31.707010   27224 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:45:31.707328   27224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:31.707379   27224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:31.721720   27224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36293
	I0610 10:45:31.722052   27224 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:31.722496   27224 main.go:141] libmachine: Using API Version  1
	I0610 10:45:31.722516   27224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:31.722774   27224 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:31.722954   27224 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:45:31.723145   27224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:45:31.723165   27224 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:45:31.726279   27224 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:45:31.726757   27224 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:45:31.726777   27224 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:45:31.726975   27224 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:45:31.727155   27224 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:45:31.727309   27224 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:45:31.727441   27224 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:45:31.813234   27224 ssh_runner.go:195] Run: systemctl --version
	I0610 10:45:31.819359   27224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:45:31.834693   27224 kubeconfig.go:125] found "ha-565925" server: "https://192.168.39.254:8443"
	I0610 10:45:31.834721   27224 api_server.go:166] Checking apiserver status ...
	I0610 10:45:31.834752   27224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:45:31.848434   27224 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	W0610 10:45:31.858539   27224 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 10:45:31.858594   27224 ssh_runner.go:195] Run: ls
	I0610 10:45:31.862769   27224 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0610 10:45:31.868074   27224 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0610 10:45:31.868100   27224 status.go:422] ha-565925 apiserver status = Running (err=<nil>)
	I0610 10:45:31.868111   27224 status.go:257] ha-565925 status: &{Name:ha-565925 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:45:31.868131   27224 status.go:255] checking status of ha-565925-m02 ...
	I0610 10:45:31.868472   27224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:31.868498   27224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:31.882802   27224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43307
	I0610 10:45:31.883178   27224 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:31.883813   27224 main.go:141] libmachine: Using API Version  1
	I0610 10:45:31.883835   27224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:31.884148   27224 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:31.884329   27224 main.go:141] libmachine: (ha-565925-m02) Calling .GetState
	I0610 10:45:31.885890   27224 status.go:330] ha-565925-m02 host status = "Stopped" (err=<nil>)
	I0610 10:45:31.885906   27224 status.go:343] host is not running, skipping remaining checks
	I0610 10:45:31.885917   27224 status.go:257] ha-565925-m02 status: &{Name:ha-565925-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:45:31.885938   27224 status.go:255] checking status of ha-565925-m03 ...
	I0610 10:45:31.886335   27224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:31.886370   27224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:31.900663   27224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37469
	I0610 10:45:31.901071   27224 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:31.901521   27224 main.go:141] libmachine: Using API Version  1
	I0610 10:45:31.901544   27224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:31.901856   27224 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:31.902032   27224 main.go:141] libmachine: (ha-565925-m03) Calling .GetState
	I0610 10:45:31.903582   27224 status.go:330] ha-565925-m03 host status = "Running" (err=<nil>)
	I0610 10:45:31.903602   27224 host.go:66] Checking if "ha-565925-m03" exists ...
	I0610 10:45:31.904015   27224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:31.904062   27224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:31.919770   27224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45959
	I0610 10:45:31.920129   27224 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:31.920608   27224 main.go:141] libmachine: Using API Version  1
	I0610 10:45:31.920631   27224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:31.921001   27224 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:31.921177   27224 main.go:141] libmachine: (ha-565925-m03) Calling .GetIP
	I0610 10:45:31.923942   27224 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:31.924375   27224 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:45:31.924405   27224 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:31.924500   27224 host.go:66] Checking if "ha-565925-m03" exists ...
	I0610 10:45:31.924786   27224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:31.924830   27224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:31.940068   27224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37023
	I0610 10:45:31.940442   27224 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:31.940886   27224 main.go:141] libmachine: Using API Version  1
	I0610 10:45:31.940908   27224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:31.941222   27224 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:31.941409   27224 main.go:141] libmachine: (ha-565925-m03) Calling .DriverName
	I0610 10:45:31.941606   27224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:45:31.941634   27224 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:45:31.944641   27224 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:31.945096   27224 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:45:31.945125   27224 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:31.945287   27224 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:45:31.945496   27224 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:45:31.945656   27224 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:45:31.945825   27224 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa Username:docker}
	I0610 10:45:32.029282   27224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:45:32.045685   27224 kubeconfig.go:125] found "ha-565925" server: "https://192.168.39.254:8443"
	I0610 10:45:32.045723   27224 api_server.go:166] Checking apiserver status ...
	I0610 10:45:32.045765   27224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:45:32.060318   27224 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1508/cgroup
	W0610 10:45:32.070400   27224 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1508/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 10:45:32.070452   27224 ssh_runner.go:195] Run: ls
	I0610 10:45:32.074850   27224 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0610 10:45:32.078863   27224 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0610 10:45:32.078884   27224 status.go:422] ha-565925-m03 apiserver status = Running (err=<nil>)
	I0610 10:45:32.078892   27224 status.go:257] ha-565925-m03 status: &{Name:ha-565925-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:45:32.078907   27224 status.go:255] checking status of ha-565925-m04 ...
	I0610 10:45:32.079214   27224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:32.079247   27224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:32.094595   27224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40047
	I0610 10:45:32.094991   27224 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:32.095488   27224 main.go:141] libmachine: Using API Version  1
	I0610 10:45:32.095508   27224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:32.095916   27224 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:32.096117   27224 main.go:141] libmachine: (ha-565925-m04) Calling .GetState
	I0610 10:45:32.097713   27224 status.go:330] ha-565925-m04 host status = "Running" (err=<nil>)
	I0610 10:45:32.097728   27224 host.go:66] Checking if "ha-565925-m04" exists ...
	I0610 10:45:32.098011   27224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:32.098043   27224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:32.112142   27224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33235
	I0610 10:45:32.112585   27224 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:32.113045   27224 main.go:141] libmachine: Using API Version  1
	I0610 10:45:32.113064   27224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:32.113386   27224 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:32.113565   27224 main.go:141] libmachine: (ha-565925-m04) Calling .GetIP
	I0610 10:45:32.116108   27224 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:32.116469   27224 main.go:141] libmachine: (ha-565925-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:20:94", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:41:43 +0000 UTC Type:0 Mac:52:54:00:c5:20:94 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-565925-m04 Clientid:01:52:54:00:c5:20:94}
	I0610 10:45:32.116491   27224 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined IP address 192.168.39.229 and MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:32.116601   27224 host.go:66] Checking if "ha-565925-m04" exists ...
	I0610 10:45:32.116991   27224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:32.117033   27224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:32.131802   27224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34253
	I0610 10:45:32.132195   27224 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:32.132594   27224 main.go:141] libmachine: Using API Version  1
	I0610 10:45:32.132611   27224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:32.132866   27224 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:32.133057   27224 main.go:141] libmachine: (ha-565925-m04) Calling .DriverName
	I0610 10:45:32.133197   27224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:45:32.133222   27224 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHHostname
	I0610 10:45:32.135926   27224 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:32.136383   27224 main.go:141] libmachine: (ha-565925-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:20:94", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:41:43 +0000 UTC Type:0 Mac:52:54:00:c5:20:94 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-565925-m04 Clientid:01:52:54:00:c5:20:94}
	I0610 10:45:32.136402   27224 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined IP address 192.168.39.229 and MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:32.136552   27224 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHPort
	I0610 10:45:32.136694   27224 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHKeyPath
	I0610 10:45:32.136809   27224 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHUsername
	I0610 10:45:32.136915   27224 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m04/id_rsa Username:docker}
	I0610 10:45:32.219837   27224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:45:32.234106   27224 status.go:257] ha-565925-m04 status: &{Name:ha-565925-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
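ha_test.go:428 asserts on the exit status of the status command, which is 3 in the first run above (where m02 reports Host:Error) and 7 once m02 is reported Stopped. A minimal sketch of capturing that exit code with os/exec, using the binary path and flags shown in this report; illustrative only:

	// run_status.go: run the status command and report its exit code
	// (illustrative sketch; binary path and flags are the ones in this report).
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-565925", "status", "-v=7", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Println("exit status:", exitErr.ExitCode())
		} else if err != nil {
			fmt.Println("failed to run:", err)
		} else {
			fmt.Println("exit status: 0")
		}
	}
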
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr: exit status 7 (645.191658ms)

                                                
                                                
-- stdout --
	ha-565925
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565925-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-565925-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565925-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:45:43.608081   27329 out.go:291] Setting OutFile to fd 1 ...
	I0610 10:45:43.608384   27329 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:45:43.608397   27329 out.go:304] Setting ErrFile to fd 2...
	I0610 10:45:43.608404   27329 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:45:43.608708   27329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 10:45:43.608979   27329 out.go:298] Setting JSON to false
	I0610 10:45:43.609013   27329 mustload.go:65] Loading cluster: ha-565925
	I0610 10:45:43.609163   27329 notify.go:220] Checking for updates...
	I0610 10:45:43.609587   27329 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:45:43.609611   27329 status.go:255] checking status of ha-565925 ...
	I0610 10:45:43.610209   27329 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:43.610284   27329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:43.632223   27329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36077
	I0610 10:45:43.632687   27329 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:43.633275   27329 main.go:141] libmachine: Using API Version  1
	I0610 10:45:43.633305   27329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:43.633630   27329 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:43.633809   27329 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:45:43.635413   27329 status.go:330] ha-565925 host status = "Running" (err=<nil>)
	I0610 10:45:43.635441   27329 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:45:43.635857   27329 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:43.635904   27329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:43.651393   27329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45277
	I0610 10:45:43.651815   27329 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:43.652303   27329 main.go:141] libmachine: Using API Version  1
	I0610 10:45:43.652326   27329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:43.652621   27329 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:43.652814   27329 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:45:43.656037   27329 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:45:43.656535   27329 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:45:43.656564   27329 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:45:43.656667   27329 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:45:43.656975   27329 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:43.657025   27329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:43.672886   27329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45653
	I0610 10:45:43.673355   27329 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:43.673793   27329 main.go:141] libmachine: Using API Version  1
	I0610 10:45:43.673814   27329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:43.674232   27329 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:43.674449   27329 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:45:43.674657   27329 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:45:43.674698   27329 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:45:43.677445   27329 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:45:43.677944   27329 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:45:43.677969   27329 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:45:43.678252   27329 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:45:43.678416   27329 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:45:43.678554   27329 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:45:43.678656   27329 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:45:43.770682   27329 ssh_runner.go:195] Run: systemctl --version
	I0610 10:45:43.779147   27329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:45:43.795912   27329 kubeconfig.go:125] found "ha-565925" server: "https://192.168.39.254:8443"
	I0610 10:45:43.795951   27329 api_server.go:166] Checking apiserver status ...
	I0610 10:45:43.795992   27329 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:45:43.815176   27329 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	W0610 10:45:43.825541   27329 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 10:45:43.825603   27329 ssh_runner.go:195] Run: ls
	I0610 10:45:43.829992   27329 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0610 10:45:43.837182   27329 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0610 10:45:43.837209   27329 status.go:422] ha-565925 apiserver status = Running (err=<nil>)
	I0610 10:45:43.837220   27329 status.go:257] ha-565925 status: &{Name:ha-565925 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:45:43.837241   27329 status.go:255] checking status of ha-565925-m02 ...
	I0610 10:45:43.837642   27329 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:43.837691   27329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:43.854178   27329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44237
	I0610 10:45:43.854678   27329 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:43.855189   27329 main.go:141] libmachine: Using API Version  1
	I0610 10:45:43.855214   27329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:43.855507   27329 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:43.855782   27329 main.go:141] libmachine: (ha-565925-m02) Calling .GetState
	I0610 10:45:43.857465   27329 status.go:330] ha-565925-m02 host status = "Stopped" (err=<nil>)
	I0610 10:45:43.857477   27329 status.go:343] host is not running, skipping remaining checks
	I0610 10:45:43.857483   27329 status.go:257] ha-565925-m02 status: &{Name:ha-565925-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:45:43.857502   27329 status.go:255] checking status of ha-565925-m03 ...
	I0610 10:45:43.857791   27329 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:43.857829   27329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:43.872944   27329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35105
	I0610 10:45:43.873390   27329 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:43.873839   27329 main.go:141] libmachine: Using API Version  1
	I0610 10:45:43.873861   27329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:43.874183   27329 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:43.874381   27329 main.go:141] libmachine: (ha-565925-m03) Calling .GetState
	I0610 10:45:43.875941   27329 status.go:330] ha-565925-m03 host status = "Running" (err=<nil>)
	I0610 10:45:43.875958   27329 host.go:66] Checking if "ha-565925-m03" exists ...
	I0610 10:45:43.876229   27329 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:43.876268   27329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:43.891696   27329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38523
	I0610 10:45:43.892114   27329 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:43.892533   27329 main.go:141] libmachine: Using API Version  1
	I0610 10:45:43.892559   27329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:43.892885   27329 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:43.893084   27329 main.go:141] libmachine: (ha-565925-m03) Calling .GetIP
	I0610 10:45:43.896247   27329 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:43.896666   27329 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:45:43.896695   27329 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:43.896847   27329 host.go:66] Checking if "ha-565925-m03" exists ...
	I0610 10:45:43.897195   27329 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:43.897240   27329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:43.912875   27329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46423
	I0610 10:45:43.913307   27329 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:43.913735   27329 main.go:141] libmachine: Using API Version  1
	I0610 10:45:43.913757   27329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:43.914009   27329 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:43.914202   27329 main.go:141] libmachine: (ha-565925-m03) Calling .DriverName
	I0610 10:45:43.914387   27329 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:45:43.914411   27329 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:45:43.917224   27329 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:43.917677   27329 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:45:43.917694   27329 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:43.917862   27329 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:45:43.917998   27329 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:45:43.918114   27329 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:45:43.918282   27329 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa Username:docker}
	I0610 10:45:44.000841   27329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:45:44.014957   27329 kubeconfig.go:125] found "ha-565925" server: "https://192.168.39.254:8443"
	I0610 10:45:44.014983   27329 api_server.go:166] Checking apiserver status ...
	I0610 10:45:44.015021   27329 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:45:44.027847   27329 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1508/cgroup
	W0610 10:45:44.037065   27329 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1508/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 10:45:44.037111   27329 ssh_runner.go:195] Run: ls
	I0610 10:45:44.040926   27329 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0610 10:45:44.045350   27329 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0610 10:45:44.045380   27329 status.go:422] ha-565925-m03 apiserver status = Running (err=<nil>)
	I0610 10:45:44.045392   27329 status.go:257] ha-565925-m03 status: &{Name:ha-565925-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:45:44.045420   27329 status.go:255] checking status of ha-565925-m04 ...
	I0610 10:45:44.045708   27329 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:44.045742   27329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:44.060195   27329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41833
	I0610 10:45:44.060677   27329 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:44.061250   27329 main.go:141] libmachine: Using API Version  1
	I0610 10:45:44.061285   27329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:44.061613   27329 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:44.061810   27329 main.go:141] libmachine: (ha-565925-m04) Calling .GetState
	I0610 10:45:44.063638   27329 status.go:330] ha-565925-m04 host status = "Running" (err=<nil>)
	I0610 10:45:44.063665   27329 host.go:66] Checking if "ha-565925-m04" exists ...
	I0610 10:45:44.063916   27329 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:44.063948   27329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:44.078305   27329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41665
	I0610 10:45:44.078738   27329 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:44.079242   27329 main.go:141] libmachine: Using API Version  1
	I0610 10:45:44.079264   27329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:44.079578   27329 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:44.079738   27329 main.go:141] libmachine: (ha-565925-m04) Calling .GetIP
	I0610 10:45:44.082380   27329 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:44.082810   27329 main.go:141] libmachine: (ha-565925-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:20:94", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:41:43 +0000 UTC Type:0 Mac:52:54:00:c5:20:94 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-565925-m04 Clientid:01:52:54:00:c5:20:94}
	I0610 10:45:44.082839   27329 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined IP address 192.168.39.229 and MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:44.082953   27329 host.go:66] Checking if "ha-565925-m04" exists ...
	I0610 10:45:44.083233   27329 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:44.083281   27329 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:44.098768   27329 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36951
	I0610 10:45:44.099152   27329 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:44.099647   27329 main.go:141] libmachine: Using API Version  1
	I0610 10:45:44.099669   27329 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:44.099956   27329 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:44.100171   27329 main.go:141] libmachine: (ha-565925-m04) Calling .DriverName
	I0610 10:45:44.100354   27329 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:45:44.100373   27329 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHHostname
	I0610 10:45:44.103160   27329 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:44.103531   27329 main.go:141] libmachine: (ha-565925-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:20:94", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:41:43 +0000 UTC Type:0 Mac:52:54:00:c5:20:94 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-565925-m04 Clientid:01:52:54:00:c5:20:94}
	I0610 10:45:44.103549   27329 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined IP address 192.168.39.229 and MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:44.103690   27329 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHPort
	I0610 10:45:44.103885   27329 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHKeyPath
	I0610 10:45:44.104040   27329 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHUsername
	I0610 10:45:44.104168   27329 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m04/id_rsa Username:docker}
	I0610 10:45:44.188288   27329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:45:44.201123   27329 status.go:257] ha-565925-m04 status: &{Name:ha-565925-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
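The stderr trace above shows how the status probe walks every node of the HA profile: the kvm2 plugin reports the libvirt domain state, minikube then opens an SSH session to each running node, checks the kubelet with systemctl, locates the kube-apiserver process with pgrep, attempts the freezer-cgroup lookup (which exits 1 here and is skipped), and finally confirms the apiserver by querying /healthz on the load-balancer VIP 192.168.39.254:8443. The sketch below is a hand-run equivalent of those checks, reusing only commands that appear in the trace; the profile name, node name and VIP are specific to this run, and the final curl assumes unauthenticated access to /healthz is permitted (the usual default for a kubeadm-based cluster).

	# Re-run the per-node health checks from the trace by hand (ha-565925-m03 shown):
	out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m03 'sudo systemctl is-active kubelet'
	out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m03 'sudo pgrep -xnf kube-apiserver.*minikube.*'
	# Same endpoint the probe falls back to when the freezer cgroup is missing:
	out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m03 'curl -sk https://192.168.39.254:8443/healthz'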
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr: exit status 7 (646.182499ms)

                                                
                                                
-- stdout --
	ha-565925
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565925-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-565925-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565925-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:45:53.407685   27433 out.go:291] Setting OutFile to fd 1 ...
	I0610 10:45:53.407974   27433 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:45:53.407985   27433 out.go:304] Setting ErrFile to fd 2...
	I0610 10:45:53.407990   27433 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:45:53.408159   27433 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 10:45:53.408313   27433 out.go:298] Setting JSON to false
	I0610 10:45:53.408334   27433 mustload.go:65] Loading cluster: ha-565925
	I0610 10:45:53.408461   27433 notify.go:220] Checking for updates...
	I0610 10:45:53.408737   27433 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:45:53.408754   27433 status.go:255] checking status of ha-565925 ...
	I0610 10:45:53.409131   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:53.409192   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:53.427197   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37973
	I0610 10:45:53.427593   27433 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:53.428169   27433 main.go:141] libmachine: Using API Version  1
	I0610 10:45:53.428193   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:53.428520   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:53.428704   27433 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:45:53.430484   27433 status.go:330] ha-565925 host status = "Running" (err=<nil>)
	I0610 10:45:53.430499   27433 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:45:53.430746   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:53.430777   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:53.445193   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38359
	I0610 10:45:53.445593   27433 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:53.446017   27433 main.go:141] libmachine: Using API Version  1
	I0610 10:45:53.446038   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:53.446332   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:53.446519   27433 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:45:53.449188   27433 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:45:53.449566   27433 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:45:53.449596   27433 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:45:53.449728   27433 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:45:53.450012   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:53.450054   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:53.464583   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42061
	I0610 10:45:53.464918   27433 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:53.465512   27433 main.go:141] libmachine: Using API Version  1
	I0610 10:45:53.465551   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:53.465844   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:53.466033   27433 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:45:53.466231   27433 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:45:53.466261   27433 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:45:53.469262   27433 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:45:53.469840   27433 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:45:53.469874   27433 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:45:53.469974   27433 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:45:53.470266   27433 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:45:53.470489   27433 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:45:53.470642   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:45:53.553232   27433 ssh_runner.go:195] Run: systemctl --version
	I0610 10:45:53.558982   27433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:45:53.575613   27433 kubeconfig.go:125] found "ha-565925" server: "https://192.168.39.254:8443"
	I0610 10:45:53.575638   27433 api_server.go:166] Checking apiserver status ...
	I0610 10:45:53.575665   27433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:45:53.594629   27433 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	W0610 10:45:53.607643   27433 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 10:45:53.607720   27433 ssh_runner.go:195] Run: ls
	I0610 10:45:53.612781   27433 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0610 10:45:53.616835   27433 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0610 10:45:53.616854   27433 status.go:422] ha-565925 apiserver status = Running (err=<nil>)
	I0610 10:45:53.616863   27433 status.go:257] ha-565925 status: &{Name:ha-565925 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:45:53.616880   27433 status.go:255] checking status of ha-565925-m02 ...
	I0610 10:45:53.617244   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:53.617281   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:53.637694   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40357
	I0610 10:45:53.638370   27433 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:53.639031   27433 main.go:141] libmachine: Using API Version  1
	I0610 10:45:53.639057   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:53.639445   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:53.639630   27433 main.go:141] libmachine: (ha-565925-m02) Calling .GetState
	I0610 10:45:53.642188   27433 status.go:330] ha-565925-m02 host status = "Stopped" (err=<nil>)
	I0610 10:45:53.642203   27433 status.go:343] host is not running, skipping remaining checks
	I0610 10:45:53.642210   27433 status.go:257] ha-565925-m02 status: &{Name:ha-565925-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:45:53.642229   27433 status.go:255] checking status of ha-565925-m03 ...
	I0610 10:45:53.642559   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:53.642603   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:53.659381   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43349
	I0610 10:45:53.659832   27433 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:53.660337   27433 main.go:141] libmachine: Using API Version  1
	I0610 10:45:53.660365   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:53.660675   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:53.660870   27433 main.go:141] libmachine: (ha-565925-m03) Calling .GetState
	I0610 10:45:53.663263   27433 status.go:330] ha-565925-m03 host status = "Running" (err=<nil>)
	I0610 10:45:53.663280   27433 host.go:66] Checking if "ha-565925-m03" exists ...
	I0610 10:45:53.663648   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:53.663697   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:53.680008   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33917
	I0610 10:45:53.680560   27433 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:53.681126   27433 main.go:141] libmachine: Using API Version  1
	I0610 10:45:53.681153   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:53.681505   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:53.681667   27433 main.go:141] libmachine: (ha-565925-m03) Calling .GetIP
	I0610 10:45:53.684639   27433 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:53.685252   27433 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:45:53.685278   27433 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:53.685534   27433 host.go:66] Checking if "ha-565925-m03" exists ...
	I0610 10:45:53.685844   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:53.685890   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:53.706973   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I0610 10:45:53.707518   27433 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:53.708056   27433 main.go:141] libmachine: Using API Version  1
	I0610 10:45:53.708088   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:53.708487   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:53.708809   27433 main.go:141] libmachine: (ha-565925-m03) Calling .DriverName
	I0610 10:45:53.709059   27433 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:45:53.709085   27433 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:45:53.712173   27433 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:53.712698   27433 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:45:53.712714   27433 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:53.712908   27433 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:45:53.713144   27433 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:45:53.713277   27433 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:45:53.713415   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa Username:docker}
	I0610 10:45:53.802236   27433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:45:53.820489   27433 kubeconfig.go:125] found "ha-565925" server: "https://192.168.39.254:8443"
	I0610 10:45:53.820518   27433 api_server.go:166] Checking apiserver status ...
	I0610 10:45:53.820557   27433 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:45:53.835573   27433 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1508/cgroup
	W0610 10:45:53.845621   27433 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1508/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 10:45:53.845679   27433 ssh_runner.go:195] Run: ls
	I0610 10:45:53.850453   27433 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0610 10:45:53.855025   27433 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0610 10:45:53.855048   27433 status.go:422] ha-565925-m03 apiserver status = Running (err=<nil>)
	I0610 10:45:53.855058   27433 status.go:257] ha-565925-m03 status: &{Name:ha-565925-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:45:53.855079   27433 status.go:255] checking status of ha-565925-m04 ...
	I0610 10:45:53.855356   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:53.855398   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:53.871109   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39125
	I0610 10:45:53.871530   27433 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:53.871937   27433 main.go:141] libmachine: Using API Version  1
	I0610 10:45:53.871961   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:53.872239   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:53.872426   27433 main.go:141] libmachine: (ha-565925-m04) Calling .GetState
	I0610 10:45:53.873747   27433 status.go:330] ha-565925-m04 host status = "Running" (err=<nil>)
	I0610 10:45:53.873766   27433 host.go:66] Checking if "ha-565925-m04" exists ...
	I0610 10:45:53.874106   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:53.874160   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:53.889143   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35337
	I0610 10:45:53.889524   27433 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:53.889978   27433 main.go:141] libmachine: Using API Version  1
	I0610 10:45:53.889999   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:53.890304   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:53.890476   27433 main.go:141] libmachine: (ha-565925-m04) Calling .GetIP
	I0610 10:45:53.893167   27433 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:53.893574   27433 main.go:141] libmachine: (ha-565925-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:20:94", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:41:43 +0000 UTC Type:0 Mac:52:54:00:c5:20:94 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-565925-m04 Clientid:01:52:54:00:c5:20:94}
	I0610 10:45:53.893600   27433 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined IP address 192.168.39.229 and MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:53.893787   27433 host.go:66] Checking if "ha-565925-m04" exists ...
	I0610 10:45:53.894089   27433 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:53.894128   27433 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:53.909410   27433 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37693
	I0610 10:45:53.909754   27433 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:53.910202   27433 main.go:141] libmachine: Using API Version  1
	I0610 10:45:53.910219   27433 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:53.910505   27433 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:53.910669   27433 main.go:141] libmachine: (ha-565925-m04) Calling .DriverName
	I0610 10:45:53.910828   27433 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:45:53.910847   27433 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHHostname
	I0610 10:45:53.913494   27433 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:53.913896   27433 main.go:141] libmachine: (ha-565925-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:20:94", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:41:43 +0000 UTC Type:0 Mac:52:54:00:c5:20:94 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-565925-m04 Clientid:01:52:54:00:c5:20:94}
	I0610 10:45:53.913918   27433 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined IP address 192.168.39.229 and MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:53.914079   27433 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHPort
	I0610 10:45:53.914236   27433 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHKeyPath
	I0610 10:45:53.914365   27433 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHUsername
	I0610 10:45:53.914474   27433 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m04/id_rsa Username:docker}
	I0610 10:45:53.995958   27433 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:45:54.010324   27433 status.go:257] ha-565925-m04 status: &{Name:ha-565925-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr" : exit status 7
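For context on the exit code: minikube's status command bit-encodes per-component health into its exit status (roughly 1 for the host, 2 for the cluster/kubelet, 4 for Kubernetes not being OK), so ha-565925-m02 reporting Host/Kubelet/APIServer all Stopped yields exit status 7 and fails the assertion even though the other three nodes are healthy. A minimal manual check of the same condition, assuming the same profile and binary as above (the template fields match the Status struct dumped in the trace):

	# The assertion amounts to: status must exit 0 once m02 is back up.
	out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr; echo "exit: $?"   # 7 while m02 is Stopped
	out/minikube-linux-amd64 -p ha-565925 node start m02 -v=7 --alsologtostderr            # the step this test exercises
	out/minikube-linux-amd64 -p ha-565925 status --format "{{.Name}}: {{.Host}}/{{.Kubelet}}/{{.APIServer}}"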
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-565925 -n ha-565925
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-565925 logs -n 25: (1.416391292s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m03:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925:/home/docker/cp-test_ha-565925-m03_ha-565925.txt                       |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925 sudo cat                                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m03_ha-565925.txt                                 |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m03:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m02:/home/docker/cp-test_ha-565925-m03_ha-565925-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925-m02 sudo cat                                          | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m03_ha-565925-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m03:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04:/home/docker/cp-test_ha-565925-m03_ha-565925-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925-m04 sudo cat                                          | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m03_ha-565925-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-565925 cp testdata/cp-test.txt                                                | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1107448961/001/cp-test_ha-565925-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925:/home/docker/cp-test_ha-565925-m04_ha-565925.txt                       |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925 sudo cat                                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m04_ha-565925.txt                                 |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m02:/home/docker/cp-test_ha-565925-m04_ha-565925-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925-m02 sudo cat                                          | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m04_ha-565925-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m03:/home/docker/cp-test_ha-565925-m04_ha-565925-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925-m03 sudo cat                                          | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m04_ha-565925-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-565925 node stop m02 -v=7                                                     | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-565925 node start m02 -v=7                                                    | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:44 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 10:37:51
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 10:37:51.251761   21811 out.go:291] Setting OutFile to fd 1 ...
	I0610 10:37:51.251853   21811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:37:51.251861   21811 out.go:304] Setting ErrFile to fd 2...
	I0610 10:37:51.251864   21811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:37:51.252062   21811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 10:37:51.252626   21811 out.go:298] Setting JSON to false
	I0610 10:37:51.253501   21811 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1212,"bootTime":1718014659,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 10:37:51.253561   21811 start.go:139] virtualization: kvm guest
	I0610 10:37:51.255741   21811 out.go:177] * [ha-565925] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 10:37:51.257390   21811 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 10:37:51.257350   21811 notify.go:220] Checking for updates...
	I0610 10:37:51.258943   21811 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:37:51.260269   21811 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 10:37:51.261624   21811 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:37:51.262918   21811 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 10:37:51.264223   21811 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:37:51.265681   21811 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 10:37:51.300203   21811 out.go:177] * Using the kvm2 driver based on user configuration
	I0610 10:37:51.301562   21811 start.go:297] selected driver: kvm2
	I0610 10:37:51.301578   21811 start.go:901] validating driver "kvm2" against <nil>
	I0610 10:37:51.301589   21811 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:37:51.302304   21811 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:37:51.302383   21811 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19046-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0610 10:37:51.317065   21811 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0610 10:37:51.317112   21811 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 10:37:51.317313   21811 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:37:51.317338   21811 cni.go:84] Creating CNI manager for ""
	I0610 10:37:51.317345   21811 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0610 10:37:51.317350   21811 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0610 10:37:51.317429   21811 start.go:340] cluster config:
	{Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:37:51.317515   21811 iso.go:125] acquiring lock: {Name:mk85871dbcb370cef71a4aa2ff7ef43503908c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:37:51.319454   21811 out.go:177] * Starting "ha-565925" primary control-plane node in "ha-565925" cluster
	I0610 10:37:51.320880   21811 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 10:37:51.321040   21811 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0610 10:37:51.321071   21811 cache.go:56] Caching tarball of preloaded images
	I0610 10:37:51.321222   21811 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 10:37:51.321232   21811 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0610 10:37:51.322248   21811 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json ...
	I0610 10:37:51.322286   21811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json: {Name:mk7c15934ae50915ca2e8e0e876fe86b3ff227de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:37:51.322436   21811 start.go:360] acquireMachinesLock for ha-565925: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:37:51.322465   21811 start.go:364] duration metric: took 15.95µs to acquireMachinesLock for "ha-565925"
	I0610 10:37:51.322481   21811 start.go:93] Provisioning new machine with config: &{Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 10:37:51.322539   21811 start.go:125] createHost starting for "" (driver="kvm2")
	I0610 10:37:51.324589   21811 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 10:37:51.324708   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:37:51.324743   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:37:51.338690   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35319
	I0610 10:37:51.339171   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:37:51.339791   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:37:51.339820   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:37:51.340154   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:37:51.340348   21811 main.go:141] libmachine: (ha-565925) Calling .GetMachineName
	I0610 10:37:51.340485   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:37:51.340650   21811 start.go:159] libmachine.API.Create for "ha-565925" (driver="kvm2")
	I0610 10:37:51.340679   21811 client.go:168] LocalClient.Create starting
	I0610 10:37:51.340707   21811 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem
	I0610 10:37:51.340737   21811 main.go:141] libmachine: Decoding PEM data...
	I0610 10:37:51.340753   21811 main.go:141] libmachine: Parsing certificate...
	I0610 10:37:51.340805   21811 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem
	I0610 10:37:51.340830   21811 main.go:141] libmachine: Decoding PEM data...
	I0610 10:37:51.340849   21811 main.go:141] libmachine: Parsing certificate...
	I0610 10:37:51.340874   21811 main.go:141] libmachine: Running pre-create checks...
	I0610 10:37:51.340886   21811 main.go:141] libmachine: (ha-565925) Calling .PreCreateCheck
	I0610 10:37:51.341201   21811 main.go:141] libmachine: (ha-565925) Calling .GetConfigRaw
	I0610 10:37:51.341623   21811 main.go:141] libmachine: Creating machine...
	I0610 10:37:51.341642   21811 main.go:141] libmachine: (ha-565925) Calling .Create
	I0610 10:37:51.341760   21811 main.go:141] libmachine: (ha-565925) Creating KVM machine...
	I0610 10:37:51.343096   21811 main.go:141] libmachine: (ha-565925) DBG | found existing default KVM network
	I0610 10:37:51.343904   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:51.343750   21834 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0610 10:37:51.343931   21811 main.go:141] libmachine: (ha-565925) DBG | created network xml: 
	I0610 10:37:51.343939   21811 main.go:141] libmachine: (ha-565925) DBG | <network>
	I0610 10:37:51.343945   21811 main.go:141] libmachine: (ha-565925) DBG |   <name>mk-ha-565925</name>
	I0610 10:37:51.343949   21811 main.go:141] libmachine: (ha-565925) DBG |   <dns enable='no'/>
	I0610 10:37:51.343955   21811 main.go:141] libmachine: (ha-565925) DBG |   
	I0610 10:37:51.343961   21811 main.go:141] libmachine: (ha-565925) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0610 10:37:51.343970   21811 main.go:141] libmachine: (ha-565925) DBG |     <dhcp>
	I0610 10:37:51.343976   21811 main.go:141] libmachine: (ha-565925) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0610 10:37:51.343984   21811 main.go:141] libmachine: (ha-565925) DBG |     </dhcp>
	I0610 10:37:51.343991   21811 main.go:141] libmachine: (ha-565925) DBG |   </ip>
	I0610 10:37:51.343999   21811 main.go:141] libmachine: (ha-565925) DBG |   
	I0610 10:37:51.344003   21811 main.go:141] libmachine: (ha-565925) DBG | </network>
	I0610 10:37:51.344012   21811 main.go:141] libmachine: (ha-565925) DBG | 
	I0610 10:37:51.349106   21811 main.go:141] libmachine: (ha-565925) DBG | trying to create private KVM network mk-ha-565925 192.168.39.0/24...
	I0610 10:37:51.417135   21811 main.go:141] libmachine: (ha-565925) DBG | private KVM network mk-ha-565925 192.168.39.0/24 created
	I0610 10:37:51.417176   21811 main.go:141] libmachine: (ha-565925) Setting up store path in /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925 ...
	I0610 10:37:51.417190   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:51.417100   21834 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:37:51.417208   21811 main.go:141] libmachine: (ha-565925) Building disk image from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0610 10:37:51.417255   21811 main.go:141] libmachine: (ha-565925) Downloading /home/jenkins/minikube-integration/19046-3880/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 10:37:51.649309   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:51.649194   21834 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa...
	I0610 10:37:51.811611   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:51.811494   21834 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/ha-565925.rawdisk...
	I0610 10:37:51.811644   21811 main.go:141] libmachine: (ha-565925) DBG | Writing magic tar header
	I0610 10:37:51.811653   21811 main.go:141] libmachine: (ha-565925) DBG | Writing SSH key tar header
	I0610 10:37:51.811660   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:51.811622   21834 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925 ...
	I0610 10:37:51.811814   21811 main.go:141] libmachine: (ha-565925) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925 (perms=drwx------)
	I0610 10:37:51.811851   21811 main.go:141] libmachine: (ha-565925) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925
	I0610 10:37:51.811864   21811 main.go:141] libmachine: (ha-565925) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines (perms=drwxr-xr-x)
	I0610 10:37:51.811881   21811 main.go:141] libmachine: (ha-565925) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube (perms=drwxr-xr-x)
	I0610 10:37:51.811894   21811 main.go:141] libmachine: (ha-565925) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880 (perms=drwxrwxr-x)
	I0610 10:37:51.811905   21811 main.go:141] libmachine: (ha-565925) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0610 10:37:51.811918   21811 main.go:141] libmachine: (ha-565925) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0610 10:37:51.811937   21811 main.go:141] libmachine: (ha-565925) Creating domain...
	I0610 10:37:51.811955   21811 main.go:141] libmachine: (ha-565925) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines
	I0610 10:37:51.811969   21811 main.go:141] libmachine: (ha-565925) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:37:51.811978   21811 main.go:141] libmachine: (ha-565925) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880
	I0610 10:37:51.812004   21811 main.go:141] libmachine: (ha-565925) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0610 10:37:51.812019   21811 main.go:141] libmachine: (ha-565925) DBG | Checking permissions on dir: /home/jenkins
	I0610 10:37:51.812028   21811 main.go:141] libmachine: (ha-565925) DBG | Checking permissions on dir: /home
	I0610 10:37:51.812038   21811 main.go:141] libmachine: (ha-565925) DBG | Skipping /home - not owner
	I0610 10:37:51.812917   21811 main.go:141] libmachine: (ha-565925) define libvirt domain using xml: 
	I0610 10:37:51.812937   21811 main.go:141] libmachine: (ha-565925) <domain type='kvm'>
	I0610 10:37:51.812965   21811 main.go:141] libmachine: (ha-565925)   <name>ha-565925</name>
	I0610 10:37:51.812977   21811 main.go:141] libmachine: (ha-565925)   <memory unit='MiB'>2200</memory>
	I0610 10:37:51.812985   21811 main.go:141] libmachine: (ha-565925)   <vcpu>2</vcpu>
	I0610 10:37:51.812991   21811 main.go:141] libmachine: (ha-565925)   <features>
	I0610 10:37:51.812999   21811 main.go:141] libmachine: (ha-565925)     <acpi/>
	I0610 10:37:51.813005   21811 main.go:141] libmachine: (ha-565925)     <apic/>
	I0610 10:37:51.813014   21811 main.go:141] libmachine: (ha-565925)     <pae/>
	I0610 10:37:51.813026   21811 main.go:141] libmachine: (ha-565925)     
	I0610 10:37:51.813038   21811 main.go:141] libmachine: (ha-565925)   </features>
	I0610 10:37:51.813045   21811 main.go:141] libmachine: (ha-565925)   <cpu mode='host-passthrough'>
	I0610 10:37:51.813053   21811 main.go:141] libmachine: (ha-565925)   
	I0610 10:37:51.813060   21811 main.go:141] libmachine: (ha-565925)   </cpu>
	I0610 10:37:51.813072   21811 main.go:141] libmachine: (ha-565925)   <os>
	I0610 10:37:51.813080   21811 main.go:141] libmachine: (ha-565925)     <type>hvm</type>
	I0610 10:37:51.813093   21811 main.go:141] libmachine: (ha-565925)     <boot dev='cdrom'/>
	I0610 10:37:51.813103   21811 main.go:141] libmachine: (ha-565925)     <boot dev='hd'/>
	I0610 10:37:51.813114   21811 main.go:141] libmachine: (ha-565925)     <bootmenu enable='no'/>
	I0610 10:37:51.813127   21811 main.go:141] libmachine: (ha-565925)   </os>
	I0610 10:37:51.813138   21811 main.go:141] libmachine: (ha-565925)   <devices>
	I0610 10:37:51.813147   21811 main.go:141] libmachine: (ha-565925)     <disk type='file' device='cdrom'>
	I0610 10:37:51.813165   21811 main.go:141] libmachine: (ha-565925)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/boot2docker.iso'/>
	I0610 10:37:51.813177   21811 main.go:141] libmachine: (ha-565925)       <target dev='hdc' bus='scsi'/>
	I0610 10:37:51.813189   21811 main.go:141] libmachine: (ha-565925)       <readonly/>
	I0610 10:37:51.813210   21811 main.go:141] libmachine: (ha-565925)     </disk>
	I0610 10:37:51.813224   21811 main.go:141] libmachine: (ha-565925)     <disk type='file' device='disk'>
	I0610 10:37:51.813237   21811 main.go:141] libmachine: (ha-565925)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0610 10:37:51.813254   21811 main.go:141] libmachine: (ha-565925)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/ha-565925.rawdisk'/>
	I0610 10:37:51.813265   21811 main.go:141] libmachine: (ha-565925)       <target dev='hda' bus='virtio'/>
	I0610 10:37:51.813277   21811 main.go:141] libmachine: (ha-565925)     </disk>
	I0610 10:37:51.813318   21811 main.go:141] libmachine: (ha-565925)     <interface type='network'>
	I0610 10:37:51.813342   21811 main.go:141] libmachine: (ha-565925)       <source network='mk-ha-565925'/>
	I0610 10:37:51.813353   21811 main.go:141] libmachine: (ha-565925)       <model type='virtio'/>
	I0610 10:37:51.813367   21811 main.go:141] libmachine: (ha-565925)     </interface>
	I0610 10:37:51.813380   21811 main.go:141] libmachine: (ha-565925)     <interface type='network'>
	I0610 10:37:51.813391   21811 main.go:141] libmachine: (ha-565925)       <source network='default'/>
	I0610 10:37:51.813402   21811 main.go:141] libmachine: (ha-565925)       <model type='virtio'/>
	I0610 10:37:51.813411   21811 main.go:141] libmachine: (ha-565925)     </interface>
	I0610 10:37:51.813424   21811 main.go:141] libmachine: (ha-565925)     <serial type='pty'>
	I0610 10:37:51.813437   21811 main.go:141] libmachine: (ha-565925)       <target port='0'/>
	I0610 10:37:51.813451   21811 main.go:141] libmachine: (ha-565925)     </serial>
	I0610 10:37:51.813460   21811 main.go:141] libmachine: (ha-565925)     <console type='pty'>
	I0610 10:37:51.813469   21811 main.go:141] libmachine: (ha-565925)       <target type='serial' port='0'/>
	I0610 10:37:51.813491   21811 main.go:141] libmachine: (ha-565925)     </console>
	I0610 10:37:51.813504   21811 main.go:141] libmachine: (ha-565925)     <rng model='virtio'>
	I0610 10:37:51.813519   21811 main.go:141] libmachine: (ha-565925)       <backend model='random'>/dev/random</backend>
	I0610 10:37:51.813532   21811 main.go:141] libmachine: (ha-565925)     </rng>
	I0610 10:37:51.813541   21811 main.go:141] libmachine: (ha-565925)     
	I0610 10:37:51.813551   21811 main.go:141] libmachine: (ha-565925)     
	I0610 10:37:51.813561   21811 main.go:141] libmachine: (ha-565925)   </devices>
	I0610 10:37:51.813572   21811 main.go:141] libmachine: (ha-565925) </domain>
	I0610 10:37:51.813581   21811 main.go:141] libmachine: (ha-565925) 
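	The XML echoed above is what libmachine hands to libvirt when it defines the ha-565925 network and domain (the connection URI is qemu:///system, per the cluster config later in this log). Roughly the same define-and-start sequence can be reproduced by hand with virsh; a minimal sketch, where mk-ha-565925-net.xml and ha-565925-domain.xml are placeholder files holding XML like the <network> and <domain> blocks logged above:

	    # sketch: define and start the network and domain manually (placeholder file names)
	    virsh -c qemu:///system net-define mk-ha-565925-net.xml
	    virsh -c qemu:///system net-start mk-ha-565925
	    virsh -c qemu:///system define ha-565925-domain.xml
	    virsh -c qemu:///system start ha-565925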
	I0610 10:37:51.817903   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:6a:77:ed in network default
	I0610 10:37:51.818489   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:37:51.818505   21811 main.go:141] libmachine: (ha-565925) Ensuring networks are active...
	I0610 10:37:51.819304   21811 main.go:141] libmachine: (ha-565925) Ensuring network default is active
	I0610 10:37:51.819598   21811 main.go:141] libmachine: (ha-565925) Ensuring network mk-ha-565925 is active
	I0610 10:37:51.820102   21811 main.go:141] libmachine: (ha-565925) Getting domain xml...
	I0610 10:37:51.820750   21811 main.go:141] libmachine: (ha-565925) Creating domain...
	I0610 10:37:53.008336   21811 main.go:141] libmachine: (ha-565925) Waiting to get IP...
	I0610 10:37:53.009359   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:37:53.009768   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find current IP address of domain ha-565925 in network mk-ha-565925
	I0610 10:37:53.009802   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:53.009746   21834 retry.go:31] will retry after 246.064928ms: waiting for machine to come up
	I0610 10:37:53.257305   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:37:53.257789   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find current IP address of domain ha-565925 in network mk-ha-565925
	I0610 10:37:53.257812   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:53.257724   21834 retry.go:31] will retry after 383.734399ms: waiting for machine to come up
	I0610 10:37:53.642985   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:37:53.643440   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find current IP address of domain ha-565925 in network mk-ha-565925
	I0610 10:37:53.643486   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:53.643424   21834 retry.go:31] will retry after 335.386365ms: waiting for machine to come up
	I0610 10:37:53.979774   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:37:53.980152   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find current IP address of domain ha-565925 in network mk-ha-565925
	I0610 10:37:53.980179   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:53.980114   21834 retry.go:31] will retry after 534.492321ms: waiting for machine to come up
	I0610 10:37:54.515753   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:37:54.516152   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find current IP address of domain ha-565925 in network mk-ha-565925
	I0610 10:37:54.516183   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:54.516103   21834 retry.go:31] will retry after 497.370783ms: waiting for machine to come up
	I0610 10:37:55.014704   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:37:55.015039   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find current IP address of domain ha-565925 in network mk-ha-565925
	I0610 10:37:55.015060   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:55.014999   21834 retry.go:31] will retry after 838.175864ms: waiting for machine to come up
	I0610 10:37:55.854337   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:37:55.854724   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find current IP address of domain ha-565925 in network mk-ha-565925
	I0610 10:37:55.854754   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:55.854678   21834 retry.go:31] will retry after 801.114412ms: waiting for machine to come up
	I0610 10:37:56.657501   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:37:56.657898   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find current IP address of domain ha-565925 in network mk-ha-565925
	I0610 10:37:56.657929   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:56.657844   21834 retry.go:31] will retry after 1.228462609s: waiting for machine to come up
	I0610 10:37:57.888227   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:37:57.888543   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find current IP address of domain ha-565925 in network mk-ha-565925
	I0610 10:37:57.888566   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:57.888493   21834 retry.go:31] will retry after 1.223943325s: waiting for machine to come up
	I0610 10:37:59.113957   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:37:59.114450   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find current IP address of domain ha-565925 in network mk-ha-565925
	I0610 10:37:59.114472   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:37:59.114403   21834 retry.go:31] will retry after 1.888368081s: waiting for machine to come up
	I0610 10:38:01.005452   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:01.005881   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find current IP address of domain ha-565925 in network mk-ha-565925
	I0610 10:38:01.005908   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:38:01.005831   21834 retry.go:31] will retry after 2.682748595s: waiting for machine to come up
	I0610 10:38:03.691612   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:03.692037   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find current IP address of domain ha-565925 in network mk-ha-565925
	I0610 10:38:03.692063   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:38:03.691999   21834 retry.go:31] will retry after 2.798658731s: waiting for machine to come up
	I0610 10:38:06.492418   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:06.492883   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find current IP address of domain ha-565925 in network mk-ha-565925
	I0610 10:38:06.492915   21811 main.go:141] libmachine: (ha-565925) DBG | I0610 10:38:06.492834   21834 retry.go:31] will retry after 3.670059356s: waiting for machine to come up
	I0610 10:38:10.164011   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:10.164464   21811 main.go:141] libmachine: (ha-565925) Found IP for machine: 192.168.39.208
	I0610 10:38:10.164484   21811 main.go:141] libmachine: (ha-565925) Reserving static IP address...
	I0610 10:38:10.164498   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has current primary IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:10.164790   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find host DHCP lease matching {name: "ha-565925", mac: "52:54:00:d3:d6:ef", ip: "192.168.39.208"} in network mk-ha-565925
	I0610 10:38:10.233619   21811 main.go:141] libmachine: (ha-565925) DBG | Getting to WaitForSSH function...
	I0610 10:38:10.233648   21811 main.go:141] libmachine: (ha-565925) Reserved static IP address: 192.168.39.208
	I0610 10:38:10.233662   21811 main.go:141] libmachine: (ha-565925) Waiting for SSH to be available...
	I0610 10:38:10.236307   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:10.236581   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925
	I0610 10:38:10.236605   21811 main.go:141] libmachine: (ha-565925) DBG | unable to find defined IP address of network mk-ha-565925 interface with MAC address 52:54:00:d3:d6:ef
	I0610 10:38:10.236729   21811 main.go:141] libmachine: (ha-565925) DBG | Using SSH client type: external
	I0610 10:38:10.236758   21811 main.go:141] libmachine: (ha-565925) DBG | Using SSH private key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa (-rw-------)
	I0610 10:38:10.236797   21811 main.go:141] libmachine: (ha-565925) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0610 10:38:10.236816   21811 main.go:141] libmachine: (ha-565925) DBG | About to run SSH command:
	I0610 10:38:10.236833   21811 main.go:141] libmachine: (ha-565925) DBG | exit 0
	I0610 10:38:10.240364   21811 main.go:141] libmachine: (ha-565925) DBG | SSH cmd err, output: exit status 255: 
	I0610 10:38:10.240389   21811 main.go:141] libmachine: (ha-565925) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0610 10:38:10.240402   21811 main.go:141] libmachine: (ha-565925) DBG | command : exit 0
	I0610 10:38:10.240409   21811 main.go:141] libmachine: (ha-565925) DBG | err     : exit status 255
	I0610 10:38:10.240418   21811 main.go:141] libmachine: (ha-565925) DBG | output  : 
	I0610 10:38:13.241461   21811 main.go:141] libmachine: (ha-565925) DBG | Getting to WaitForSSH function...
	I0610 10:38:13.244539   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.244924   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:13.244979   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.245122   21811 main.go:141] libmachine: (ha-565925) DBG | Using SSH client type: external
	I0610 10:38:13.245148   21811 main.go:141] libmachine: (ha-565925) DBG | Using SSH private key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa (-rw-------)
	I0610 10:38:13.245205   21811 main.go:141] libmachine: (ha-565925) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0610 10:38:13.245226   21811 main.go:141] libmachine: (ha-565925) DBG | About to run SSH command:
	I0610 10:38:13.245247   21811 main.go:141] libmachine: (ha-565925) DBG | exit 0
	I0610 10:38:13.372605   21811 main.go:141] libmachine: (ha-565925) DBG | SSH cmd err, output: <nil>: 
	I0610 10:38:13.372854   21811 main.go:141] libmachine: (ha-565925) KVM machine creation complete!
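	The retry loop above polls libvirt until a DHCP lease appears for the domain's MAC and then probes SSH with exit 0 (the first probe fails with status 255 while sshd is still starting). If a run stalls at "waiting for machine to come up", the same two things can be checked by hand; a sketch reusing the MAC, IP, and key path from this log:

	    # sketch: inspect the lease and SSH reachability the retry loop is waiting on
	    virsh -c qemu:///system net-dhcp-leases mk-ha-565925   # expect 52:54:00:d3:d6:ef -> 192.168.39.208
	    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -i /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa \
	        docker@192.168.39.208 'exit 0' && echo reachable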
	I0610 10:38:13.373161   21811 main.go:141] libmachine: (ha-565925) Calling .GetConfigRaw
	I0610 10:38:13.373727   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:38:13.373891   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:38:13.374083   21811 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0610 10:38:13.374101   21811 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:38:13.375305   21811 main.go:141] libmachine: Detecting operating system of created instance...
	I0610 10:38:13.375320   21811 main.go:141] libmachine: Waiting for SSH to be available...
	I0610 10:38:13.375326   21811 main.go:141] libmachine: Getting to WaitForSSH function...
	I0610 10:38:13.375332   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:38:13.377839   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.378205   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:13.378238   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.378323   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:38:13.378511   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:13.378691   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:13.378889   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:38:13.379122   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:38:13.379352   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:38:13.379364   21811 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0610 10:38:13.488188   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 10:38:13.488221   21811 main.go:141] libmachine: Detecting the provisioner...
	I0610 10:38:13.488235   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:38:13.490919   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.491303   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:13.491328   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.491520   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:38:13.491692   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:13.491853   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:13.491947   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:38:13.492073   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:38:13.492224   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:38:13.492240   21811 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0610 10:38:13.601278   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0610 10:38:13.601344   21811 main.go:141] libmachine: found compatible host: buildroot
	I0610 10:38:13.601350   21811 main.go:141] libmachine: Provisioning with buildroot...
	I0610 10:38:13.601358   21811 main.go:141] libmachine: (ha-565925) Calling .GetMachineName
	I0610 10:38:13.601582   21811 buildroot.go:166] provisioning hostname "ha-565925"
	I0610 10:38:13.601602   21811 main.go:141] libmachine: (ha-565925) Calling .GetMachineName
	I0610 10:38:13.601751   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:38:13.604134   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.604396   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:13.604425   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.604563   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:38:13.604755   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:13.604937   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:13.605076   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:38:13.605235   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:38:13.605439   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:38:13.605455   21811 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565925 && echo "ha-565925" | sudo tee /etc/hostname
	I0610 10:38:13.726337   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565925
	
	I0610 10:38:13.726370   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:38:13.729270   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.729605   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:13.729634   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.729783   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:38:13.729962   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:13.730124   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:13.730279   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:38:13.730441   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:38:13.730606   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:38:13.730621   21811 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565925' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565925/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565925' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 10:38:13.849953   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 10:38:13.849994   21811 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19046-3880/.minikube CaCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19046-3880/.minikube}
	I0610 10:38:13.850014   21811 buildroot.go:174] setting up certificates
	I0610 10:38:13.850025   21811 provision.go:84] configureAuth start
	I0610 10:38:13.850033   21811 main.go:141] libmachine: (ha-565925) Calling .GetMachineName
	I0610 10:38:13.850358   21811 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:38:13.853076   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.853447   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:13.853488   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.853577   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:38:13.855383   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.855633   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:13.855662   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:13.855748   21811 provision.go:143] copyHostCerts
	I0610 10:38:13.855797   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 10:38:13.855864   21811 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem, removing ...
	I0610 10:38:13.855878   21811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 10:38:13.855979   21811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem (1082 bytes)
	I0610 10:38:13.856105   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 10:38:13.856136   21811 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem, removing ...
	I0610 10:38:13.856147   21811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 10:38:13.856201   21811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem (1123 bytes)
	I0610 10:38:13.856273   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 10:38:13.856301   21811 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem, removing ...
	I0610 10:38:13.856312   21811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 10:38:13.856362   21811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem (1675 bytes)
	I0610 10:38:13.856449   21811 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem org=jenkins.ha-565925 san=[127.0.0.1 192.168.39.208 ha-565925 localhost minikube]
	I0610 10:38:14.027814   21811 provision.go:177] copyRemoteCerts
	I0610 10:38:14.027896   21811 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 10:38:14.027925   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:38:14.030316   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.030609   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:14.030639   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.030782   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:38:14.031038   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:14.031212   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:38:14.031342   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:38:14.114541   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 10:38:14.114600   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 10:38:14.137229   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 10:38:14.137297   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0610 10:38:14.159266   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 10:38:14.159335   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 10:38:14.181114   21811 provision.go:87] duration metric: took 331.078282ms to configureAuth
	I0610 10:38:14.181140   21811 buildroot.go:189] setting minikube options for container-runtime
	I0610 10:38:14.181300   21811 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:38:14.181368   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:38:14.183658   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.183974   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:14.183994   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.184189   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:38:14.184355   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:14.184466   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:14.184620   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:38:14.184806   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:38:14.184983   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:38:14.185005   21811 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 10:38:14.448439   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 10:38:14.448466   21811 main.go:141] libmachine: Checking connection to Docker...
	I0610 10:38:14.448474   21811 main.go:141] libmachine: (ha-565925) Calling .GetURL
	I0610 10:38:14.449817   21811 main.go:141] libmachine: (ha-565925) DBG | Using libvirt version 6000000
	I0610 10:38:14.451654   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.451966   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:14.452025   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.452184   21811 main.go:141] libmachine: Docker is up and running!
	I0610 10:38:14.452230   21811 main.go:141] libmachine: Reticulating splines...
	I0610 10:38:14.452247   21811 client.go:171] duration metric: took 23.111560156s to LocalClient.Create
	I0610 10:38:14.452273   21811 start.go:167] duration metric: took 23.111624599s to libmachine.API.Create "ha-565925"
	I0610 10:38:14.452284   21811 start.go:293] postStartSetup for "ha-565925" (driver="kvm2")
	I0610 10:38:14.452293   21811 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 10:38:14.452309   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:38:14.452542   21811 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 10:38:14.452567   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:38:14.454560   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.454799   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:14.454824   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.455008   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:38:14.455188   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:14.455367   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:38:14.455512   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:38:14.538840   21811 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 10:38:14.542806   21811 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 10:38:14.542832   21811 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/addons for local assets ...
	I0610 10:38:14.542908   21811 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/files for local assets ...
	I0610 10:38:14.542996   21811 filesync.go:149] local asset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> 107582.pem in /etc/ssl/certs
	I0610 10:38:14.543006   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /etc/ssl/certs/107582.pem
	I0610 10:38:14.543099   21811 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 10:38:14.551864   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /etc/ssl/certs/107582.pem (1708 bytes)
	I0610 10:38:14.573983   21811 start.go:296] duration metric: took 121.686642ms for postStartSetup
	I0610 10:38:14.574041   21811 main.go:141] libmachine: (ha-565925) Calling .GetConfigRaw
	I0610 10:38:14.574626   21811 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:38:14.577198   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.577656   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:14.577688   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.577907   21811 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json ...
	I0610 10:38:14.578137   21811 start.go:128] duration metric: took 23.255589829s to createHost
	I0610 10:38:14.578159   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:38:14.580518   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.580885   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:14.580913   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.581025   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:38:14.581214   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:14.581374   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:14.581521   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:38:14.581670   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:38:14.581822   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:38:14.581832   21811 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 10:38:14.693318   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718015894.671688461
	
	I0610 10:38:14.693338   21811 fix.go:216] guest clock: 1718015894.671688461
	I0610 10:38:14.693345   21811 fix.go:229] Guest: 2024-06-10 10:38:14.671688461 +0000 UTC Remote: 2024-06-10 10:38:14.578150112 +0000 UTC m=+23.364236686 (delta=93.538349ms)
	I0610 10:38:14.693363   21811 fix.go:200] guest clock delta is within tolerance: 93.538349ms
	I0610 10:38:14.693368   21811 start.go:83] releasing machines lock for "ha-565925", held for 23.370894383s
	I0610 10:38:14.693384   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:38:14.693618   21811 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:38:14.695981   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.696299   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:14.696326   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.696441   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:38:14.696879   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:38:14.697099   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:38:14.697159   21811 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 10:38:14.697204   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:38:14.697290   21811 ssh_runner.go:195] Run: cat /version.json
	I0610 10:38:14.697314   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:38:14.699825   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.699991   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.700212   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:14.700248   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.700321   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:14.700347   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:14.700356   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:38:14.700545   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:38:14.700576   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:14.700755   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:38:14.700767   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:14.700944   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:38:14.701039   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:38:14.701222   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:38:14.810876   21811 ssh_runner.go:195] Run: systemctl --version
	I0610 10:38:14.816440   21811 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 10:38:14.973102   21811 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 10:38:14.979604   21811 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 10:38:14.979679   21811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 10:38:14.996243   21811 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 10:38:14.996269   21811 start.go:494] detecting cgroup driver to use...
	I0610 10:38:14.996336   21811 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 10:38:15.014214   21811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 10:38:15.028552   21811 docker.go:217] disabling cri-docker service (if available) ...
	I0610 10:38:15.028604   21811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 10:38:15.042309   21811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 10:38:15.056424   21811 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 10:38:15.182913   21811 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 10:38:15.341472   21811 docker.go:233] disabling docker service ...
	I0610 10:38:15.341527   21811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 10:38:15.354612   21811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 10:38:15.366720   21811 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 10:38:15.477585   21811 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 10:38:15.594707   21811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 10:38:15.614378   21811 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 10:38:15.631233   21811 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0610 10:38:15.631290   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:38:15.641266   21811 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 10:38:15.641329   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:38:15.650895   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:38:15.660550   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:38:15.669822   21811 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 10:38:15.679392   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:38:15.688594   21811 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:38:15.704405   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
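	The run of sed commands above edits the CRI-O drop-in in place: it pins the pause image, switches the cgroup manager to cgroupfs, forces conmon into the pod cgroup, and re-adds net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A quick way to spot-check the result on the node (a sketch; expected values are inferred from the sed expressions above, not shown verbatim in this log):

	    # sketch: verify the keys the sed edits are expected to leave behind
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    # expected, approximately:
	    #   pause_image = "registry.k8s.io/pause:3.9"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",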
	I0610 10:38:15.713975   21811 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 10:38:15.722631   21811 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0610 10:38:15.722682   21811 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0610 10:38:15.734616   21811 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 10:38:15.743367   21811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:38:15.853208   21811 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0610 10:38:15.982454   21811 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 10:38:15.982525   21811 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 10:38:15.987288   21811 start.go:562] Will wait 60s for crictl version
	I0610 10:38:15.987338   21811 ssh_runner.go:195] Run: which crictl
	I0610 10:38:15.991081   21811 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 10:38:16.030890   21811 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
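	The crictl probe above is what feeds the "Preparing Kubernetes v1.30.1 on CRI-O 1.29.1" banner a few lines below. The same runtime check can be repeated against the node once the profile exists; a sketch assuming the ha-565925 profile from this run:

	    # sketch: query the container runtime on the node directly
	    minikube -p ha-565925 ssh -- sudo crictl version
	    minikube -p ha-565925 ssh -- crio --version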
	I0610 10:38:16.030953   21811 ssh_runner.go:195] Run: crio --version
	I0610 10:38:16.060156   21811 ssh_runner.go:195] Run: crio --version
	I0610 10:38:16.089799   21811 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0610 10:38:16.091090   21811 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:38:16.093471   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:16.093810   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:16.093840   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:16.093985   21811 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0610 10:38:16.097994   21811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 10:38:16.110114   21811 kubeadm.go:877] updating cluster {Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 10:38:16.110207   21811 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 10:38:16.110254   21811 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 10:38:16.140789   21811 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0610 10:38:16.140872   21811 ssh_runner.go:195] Run: which lz4
	I0610 10:38:16.144426   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0610 10:38:16.144517   21811 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0610 10:38:16.148171   21811 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 10:38:16.148196   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0610 10:38:17.352785   21811 crio.go:462] duration metric: took 1.208292318s to copy over tarball
	I0610 10:38:17.352869   21811 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 10:38:19.419050   21811 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.066150111s)
	I0610 10:38:19.419081   21811 crio.go:469] duration metric: took 2.066261747s to extract the tarball
	I0610 10:38:19.419091   21811 ssh_runner.go:146] rm: /preloaded.tar.lz4
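
[Editor's note] The preload flow above is: check whether the tarball is already on the node, scp it over if not, extract it into /var with lz4 while preserving security xattrs, then delete it so a later `crictl images` sees the preloaded images. A sketch of the extract step under the assumption that tar and lz4 are installed locally (paths taken from the log; minikube performs the copy over SSH instead of locally):

package main

import (
	"log"
	"os"
	"os/exec"
)

// extractPreload mirrors the log: skip if the tarball is missing (this is
// where minikube would scp the cached preload over), extract it into destDir
// with lz4 while keeping security.capability xattrs, then remove it.
func extractPreload(tarball, destDir string) error {
	if _, err := os.Stat(tarball); err != nil {
		return err
	}
	tar := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	tar.Stdout, tar.Stderr = os.Stdout, os.Stderr
	if err := tar.Run(); err != nil {
		return err
	}
	return exec.Command("sudo", "rm", "-f", tarball).Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		log.Fatal(err)
	}
}
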
	I0610 10:38:19.454990   21811 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 10:38:19.495814   21811 crio.go:514] all images are preloaded for cri-o runtime.
	I0610 10:38:19.495839   21811 cache_images.go:84] Images are preloaded, skipping loading
	I0610 10:38:19.495846   21811 kubeadm.go:928] updating node { 192.168.39.208 8443 v1.30.1 crio true true} ...
	I0610 10:38:19.495969   21811 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565925 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
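
[Editor's note] The kubelet systemd drop-in above is rendered from the node's name, IP and Kubernetes version. A minimal text/template sketch of that rendering; the template text is copied from the log output, but the template itself is illustrative and not minikube's actual source:

package main

import (
	"os"
	"text/template"
)

// kubeletUnit is an illustrative template for the drop-in shown in the log.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Values taken from the log for this node.
	_ = t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.30.1", "ha-565925", "192.168.39.208"})
}
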
	I0610 10:38:19.496037   21811 ssh_runner.go:195] Run: crio config
	I0610 10:38:19.541166   21811 cni.go:84] Creating CNI manager for ""
	I0610 10:38:19.541184   21811 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0610 10:38:19.541195   21811 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 10:38:19.541221   21811 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.208 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-565925 NodeName:ha-565925 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 10:38:19.541363   21811 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-565925"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
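
[Editor's note] The generated file is four YAML documents in one stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A toy sketch that decodes just the KubeletConfiguration fields of interest, assuming gopkg.in/yaml.v3 as the YAML library (kubeadm itself uses its own typed decoders):

package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

// kubeletDoc holds only the KubeletConfiguration fields we want to inspect;
// everything else in the document is ignored by the decoder.
type kubeletDoc struct {
	Kind                     string `yaml:"kind"`
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	StaticPodPath            string `yaml:"staticPodPath"`
}

func main() {
	// A trimmed copy of the KubeletConfiguration document from the log.
	const doc = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
staticPodPath: /etc/kubernetes/manifests
`
	var k kubeletDoc
	if err := yaml.Unmarshal([]byte(doc), &k); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s: driver=%s endpoint=%s manifests=%s\n",
		k.Kind, k.CgroupDriver, k.ContainerRuntimeEndpoint, k.StaticPodPath)
}
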
	
	I0610 10:38:19.541389   21811 kube-vip.go:115] generating kube-vip config ...
	I0610 10:38:19.541443   21811 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0610 10:38:19.557806   21811 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0610 10:38:19.557908   21811 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
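
[Editor's note] The manifest above runs kube-vip as a static pod in ARP mode with leader election, announcing the HA virtual IP 192.168.39.254 on eth0 and load-balancing the API server on port 8443. A small sketch of how that environment could be assembled from a VIP, interface and port; envVar is a stand-in type, not the real corev1.EnvVar, and the builder is illustrative:

package main

import "fmt"

// envVar is a minimal stand-in for a container env entry.
type envVar struct{ Name, Value string }

// kubeVIPEnv assembles the settings kube-vip needs for ARP-mode control-plane
// load balancing: the VIP to announce, the interface to announce it on, and
// leader-election settings so only one control-plane node holds the VIP.
func kubeVIPEnv(vip, iface, port string) []envVar {
	return []envVar{
		{"vip_arp", "true"},
		{"port", port},
		{"vip_interface", iface},
		{"vip_cidr", "32"},
		{"cp_enable", "true"},
		{"vip_leaderelection", "true"},
		{"vip_leasename", "plndr-cp-lock"},
		{"address", vip},
		{"lb_enable", "true"},
		{"lb_port", port},
	}
}

func main() {
	// Values from the log.
	for _, e := range kubeVIPEnv("192.168.39.254", "eth0", "8443") {
		fmt.Printf("%s=%s\n", e.Name, e.Value)
	}
}
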
	I0610 10:38:19.557970   21811 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 10:38:19.567350   21811 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 10:38:19.567431   21811 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0610 10:38:19.576067   21811 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0610 10:38:19.591463   21811 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 10:38:19.606260   21811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0610 10:38:19.621162   21811 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0610 10:38:19.635702   21811 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0610 10:38:19.639242   21811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 10:38:19.649613   21811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:38:19.769768   21811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 10:38:19.786120   21811 certs.go:68] Setting up /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925 for IP: 192.168.39.208
	I0610 10:38:19.786143   21811 certs.go:194] generating shared ca certs ...
	I0610 10:38:19.786171   21811 certs.go:226] acquiring lock for ca certs: {Name:mke8b68fecbd8b649419d142cdde25446085a9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:38:19.786337   21811 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key
	I0610 10:38:19.786388   21811 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key
	I0610 10:38:19.786402   21811 certs.go:256] generating profile certs ...
	I0610 10:38:19.786462   21811 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.key
	I0610 10:38:19.786481   21811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.crt with IP's: []
	I0610 10:38:20.019840   21811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.crt ...
	I0610 10:38:20.019874   21811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.crt: {Name:mk9042445f0af50cdbaf88bd29191a507127a8bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:38:20.020068   21811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.key ...
	I0610 10:38:20.020079   21811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.key: {Name:mkccce487881b7a4f98e7bb9c1f61d8a01ffb313 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:38:20.020153   21811 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.8612117b
	I0610 10:38:20.020168   21811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.8612117b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.254]
	I0610 10:38:20.081806   21811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.8612117b ...
	I0610 10:38:20.081837   21811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.8612117b: {Name:mk0a55eb47942ca3b243d80b3f5f5590fb9a2fe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:38:20.082000   21811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.8612117b ...
	I0610 10:38:20.082015   21811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.8612117b: {Name:mk5ae810d9d01af4bd4d963e64d1d55d2546edb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:38:20.082084   21811 certs.go:381] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.8612117b -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt
	I0610 10:38:20.082174   21811 certs.go:385] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.8612117b -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key
	I0610 10:38:20.082227   21811 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key
	I0610 10:38:20.082242   21811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt with IP's: []
	I0610 10:38:20.205365   21811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt ...
	I0610 10:38:20.205392   21811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt: {Name:mk7fbc7bf6d3d63bd22e3a09e4c6daba5500426b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:38:20.205538   21811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key ...
	I0610 10:38:20.205548   21811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key: {Name:mke59d4711702f0251bbe2e2eacb7af45b126045 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
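
[Editor's note] Each profile cert above is a key pair signed by the shared minikubeCA, and the apiserver cert carries the listed IP SANs (10.96.0.1 is the in-cluster Service IP of the API server, 192.168.39.254 the HA VIP). A crypto/x509 sketch of the general technique of issuing a CA-signed serving cert with IP SANs; key sizes, serial numbers and lifetimes are illustrative, and the real crypto.go sets more fields:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

// signServingCert issues a serving certificate for the given IP SANs, signed
// by caCert/caKey. Sketch only; minikube also persists keys, locks files, etc.
func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	return x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
}

func main() {
	// Self-signed stand-in CA so the sketch runs on its own; the real run
	// reuses the existing minikubeCA key pair.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}
	ips := []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.208"), net.ParseIP("192.168.39.254")}
	if _, err := signServingCert(caCert, caKey, ips); err != nil {
		log.Fatal(err)
	}
	log.Println("issued an apiserver-style serving cert with IP SANs:", ips)
}
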
	I0610 10:38:20.205607   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 10:38:20.205624   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 10:38:20.205634   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 10:38:20.205647   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 10:38:20.205660   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 10:38:20.205672   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 10:38:20.205684   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 10:38:20.205696   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 10:38:20.205741   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem (1338 bytes)
	W0610 10:38:20.205773   21811 certs.go:480] ignoring /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758_empty.pem, impossibly tiny 0 bytes
	I0610 10:38:20.205782   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 10:38:20.205802   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem (1082 bytes)
	I0610 10:38:20.205824   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem (1123 bytes)
	I0610 10:38:20.205849   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem (1675 bytes)
	I0610 10:38:20.205882   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem (1708 bytes)
	I0610 10:38:20.205910   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:38:20.205929   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem -> /usr/share/ca-certificates/10758.pem
	I0610 10:38:20.205942   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /usr/share/ca-certificates/107582.pem
	I0610 10:38:20.206417   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 10:38:20.234398   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 10:38:20.259501   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 10:38:20.284642   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 10:38:20.309551   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0610 10:38:20.333927   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 10:38:20.358570   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 10:38:20.382499   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 10:38:20.405619   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 10:38:20.427023   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem --> /usr/share/ca-certificates/10758.pem (1338 bytes)
	I0610 10:38:20.448478   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /usr/share/ca-certificates/107582.pem (1708 bytes)
	I0610 10:38:20.469859   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 10:38:20.485059   21811 ssh_runner.go:195] Run: openssl version
	I0610 10:38:20.490511   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10758.pem && ln -fs /usr/share/ca-certificates/10758.pem /etc/ssl/certs/10758.pem"
	I0610 10:38:20.500073   21811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10758.pem
	I0610 10:38:20.503921   21811 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:34 /usr/share/ca-certificates/10758.pem
	I0610 10:38:20.503963   21811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10758.pem
	I0610 10:38:20.509351   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10758.pem /etc/ssl/certs/51391683.0"
	I0610 10:38:20.518750   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107582.pem && ln -fs /usr/share/ca-certificates/107582.pem /etc/ssl/certs/107582.pem"
	I0610 10:38:20.529846   21811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107582.pem
	I0610 10:38:20.533980   21811 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:34 /usr/share/ca-certificates/107582.pem
	I0610 10:38:20.534043   21811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107582.pem
	I0610 10:38:20.539285   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107582.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 10:38:20.552015   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 10:38:20.563200   21811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:38:20.569833   21811 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:38:20.569905   21811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:38:20.580147   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
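
[Editor's note] Each CA above is copied under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs as "<subject-hash>.0", where the hash comes from `openssl x509 -hash -noout`; that hash-named link is how OpenSSL's default lookup finds trust anchors. A sketch of the same hash-and-link step, shelling out to openssl (assumes openssl on PATH and root privileges; the helper name is illustrative):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of certPath and symlinks the
// cert into certsDir as "<hash>.0", mirroring the openssl/ln steps in the log.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Equivalent of: sudo ln -fs <cert> /etc/ssl/certs/<hash>.0
	return exec.Command("sudo", "ln", "-fs", certPath, link).Run()
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
}
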
	I0610 10:38:20.595653   21811 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 10:38:20.600484   21811 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 10:38:20.600546   21811 kubeadm.go:391] StartCluster: {Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:38:20.600638   21811 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0610 10:38:20.600697   21811 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 10:38:20.644847   21811 cri.go:89] found id: ""
	I0610 10:38:20.644930   21811 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 10:38:20.656257   21811 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 10:38:20.666711   21811 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 10:38:20.676925   21811 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 10:38:20.676953   21811 kubeadm.go:156] found existing configuration files:
	
	I0610 10:38:20.677004   21811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 10:38:20.686681   21811 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 10:38:20.686733   21811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 10:38:20.696625   21811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 10:38:20.706415   21811 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 10:38:20.706466   21811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 10:38:20.716555   21811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 10:38:20.726695   21811 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 10:38:20.726754   21811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 10:38:20.736817   21811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 10:38:20.746438   21811 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 10:38:20.746495   21811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 10:38:20.756594   21811 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 10:38:20.856527   21811 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0610 10:38:20.856579   21811 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 10:38:20.979552   21811 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 10:38:20.979706   21811 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 10:38:20.979841   21811 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 10:38:21.169803   21811 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 10:38:21.172856   21811 out.go:204]   - Generating certificates and keys ...
	I0610 10:38:21.172975   21811 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 10:38:21.173075   21811 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 10:38:21.563053   21811 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 10:38:21.645799   21811 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0610 10:38:21.851856   21811 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0610 10:38:22.064223   21811 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0610 10:38:22.132741   21811 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0610 10:38:22.133044   21811 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-565925 localhost] and IPs [192.168.39.208 127.0.0.1 ::1]
	I0610 10:38:22.187292   21811 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0610 10:38:22.187483   21811 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-565925 localhost] and IPs [192.168.39.208 127.0.0.1 ::1]
	I0610 10:38:22.422331   21811 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 10:38:22.564015   21811 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 10:38:22.722893   21811 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0610 10:38:22.722990   21811 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 10:38:22.790310   21811 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 10:38:22.917415   21811 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 10:38:22.965414   21811 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 10:38:23.140970   21811 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 10:38:23.265276   21811 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 10:38:23.265901   21811 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 10:38:23.268756   21811 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 10:38:23.270635   21811 out.go:204]   - Booting up control plane ...
	I0610 10:38:23.270769   21811 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 10:38:23.270879   21811 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 10:38:23.270988   21811 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 10:38:23.289805   21811 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 10:38:23.289926   21811 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 10:38:23.290000   21811 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 10:38:23.421127   21811 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0610 10:38:23.421256   21811 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 10:38:24.422472   21811 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002005249s
	I0610 10:38:24.422564   21811 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0610 10:38:30.060319   21811 kubeadm.go:309] [api-check] The API server is healthy after 5.640390704s
	I0610 10:38:30.081713   21811 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 10:38:30.102352   21811 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 10:38:30.137788   21811 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 10:38:30.137966   21811 kubeadm.go:309] [mark-control-plane] Marking the node ha-565925 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 10:38:30.151475   21811 kubeadm.go:309] [bootstrap-token] Using token: e9zf9o.slxtdaq0q60d023m
	I0610 10:38:30.153090   21811 out.go:204]   - Configuring RBAC rules ...
	I0610 10:38:30.153209   21811 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 10:38:30.159480   21811 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 10:38:30.170946   21811 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 10:38:30.174436   21811 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 10:38:30.178756   21811 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 10:38:30.182584   21811 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 10:38:30.476495   21811 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 10:38:30.911755   21811 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 10:38:31.477200   21811 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 10:38:31.477222   21811 kubeadm.go:309] 
	I0610 10:38:31.477294   21811 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 10:38:31.477309   21811 kubeadm.go:309] 
	I0610 10:38:31.477393   21811 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 10:38:31.477401   21811 kubeadm.go:309] 
	I0610 10:38:31.477440   21811 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 10:38:31.477513   21811 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 10:38:31.477590   21811 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 10:38:31.477601   21811 kubeadm.go:309] 
	I0610 10:38:31.477680   21811 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 10:38:31.477692   21811 kubeadm.go:309] 
	I0610 10:38:31.477762   21811 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 10:38:31.477775   21811 kubeadm.go:309] 
	I0610 10:38:31.477848   21811 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 10:38:31.477945   21811 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 10:38:31.478038   21811 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 10:38:31.478047   21811 kubeadm.go:309] 
	I0610 10:38:31.478145   21811 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 10:38:31.478253   21811 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 10:38:31.478266   21811 kubeadm.go:309] 
	I0610 10:38:31.478366   21811 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token e9zf9o.slxtdaq0q60d023m \
	I0610 10:38:31.478487   21811 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e \
	I0610 10:38:31.478517   21811 kubeadm.go:309] 	--control-plane 
	I0610 10:38:31.478522   21811 kubeadm.go:309] 
	I0610 10:38:31.478593   21811 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 10:38:31.478599   21811 kubeadm.go:309] 
	I0610 10:38:31.478681   21811 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token e9zf9o.slxtdaq0q60d023m \
	I0610 10:38:31.478790   21811 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e 
	I0610 10:38:31.479029   21811 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
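
[Editor's note] The `--discovery-token-ca-cert-hash sha256:...` value in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info; joining nodes use it to pin the CA they discover via the bootstrap token. A sketch that recomputes it from a ca.crt PEM (the path below is taken from the log and is an assumption about where you keep the CA):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

// caCertHash reproduces the kubeadm discovery hash: sha256 over the
// DER-encoded SubjectPublicKeyInfo of the cluster CA certificate.
func caCertHash(caPEMPath string) (string, error) {
	pemBytes, err := os.ReadFile(caPEMPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", caPEMPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(h)
}
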
	I0610 10:38:31.479047   21811 cni.go:84] Creating CNI manager for ""
	I0610 10:38:31.479055   21811 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0610 10:38:31.480724   21811 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0610 10:38:31.482150   21811 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0610 10:38:31.487108   21811 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0610 10:38:31.487122   21811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0610 10:38:31.506446   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0610 10:38:31.867007   21811 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 10:38:31.867131   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:31.867189   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565925 minikube.k8s.io/updated_at=2024_06_10T10_38_31_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=ha-565925 minikube.k8s.io/primary=true
	I0610 10:38:32.044591   21811 ops.go:34] apiserver oom_adj: -16
	I0610 10:38:32.044759   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:32.545053   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:33.045090   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:33.545542   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:34.045821   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:34.545080   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:35.045408   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:35.545827   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:36.045121   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:36.545649   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:37.045675   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:37.545113   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:38.045504   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:38.544868   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:39.044773   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:39.545795   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:40.044900   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:40.545229   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:41.045782   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:41.544927   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:42.045663   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:42.545200   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:43.044832   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:43.544974   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 10:38:43.627996   21811 kubeadm.go:1107] duration metric: took 11.760906967s to wait for elevateKubeSystemPrivileges
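
[Editor's note] The repeated `kubectl get sa default` runs above are a readiness poll: the elevateKubeSystemPrivileges step retries every ~500ms until the default service account exists, which signals that the service-account machinery is ready. A generic sketch of that poll-until-success pattern (the command and interval come from the log; the helper itself is illustrative):

package main

import (
	"context"
	"log"
	"os/exec"
	"time"
)

// waitFor re-runs cmd every interval until it exits successfully or the
// context expires, the same shape as the repeated kubectl calls in the log.
func waitFor(ctx context.Context, interval time.Duration, name string, args ...string) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if err := exec.CommandContext(ctx, name, args...).Run(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	if err := waitFor(ctx, 500*time.Millisecond, "kubectl", "get", "sa", "default"); err != nil {
		log.Fatal(err)
	}
	log.Println("default service account exists")
}
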
	W0610 10:38:43.628041   21811 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 10:38:43.628052   21811 kubeadm.go:393] duration metric: took 23.027508956s to StartCluster
	I0610 10:38:43.628074   21811 settings.go:142] acquiring lock: {Name:mk00410f6b6051b7558c7a348cc8c9f1c35c7547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:38:43.628168   21811 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 10:38:43.628798   21811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/kubeconfig: {Name:mk6bc087e599296d9e4a696a021944fac20ee98b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:38:43.629098   21811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 10:38:43.629108   21811 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 10:38:43.629163   21811 addons.go:69] Setting storage-provisioner=true in profile "ha-565925"
	I0610 10:38:43.629090   21811 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 10:38:43.629200   21811 addons.go:234] Setting addon storage-provisioner=true in "ha-565925"
	I0610 10:38:43.629210   21811 addons.go:69] Setting default-storageclass=true in profile "ha-565925"
	I0610 10:38:43.629237   21811 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:38:43.629242   21811 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-565925"
	I0610 10:38:43.629201   21811 start.go:240] waiting for startup goroutines ...
	I0610 10:38:43.629326   21811 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:38:43.629630   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:38:43.629661   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:38:43.629701   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:38:43.629749   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:38:43.644451   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45991
	I0610 10:38:43.644595   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37057
	I0610 10:38:43.644888   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:38:43.644910   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:38:43.645369   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:38:43.645395   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:38:43.645591   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:38:43.645613   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:38:43.645679   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:38:43.645950   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:38:43.646162   21811 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:38:43.646276   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:38:43.646304   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:38:43.648486   21811 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 10:38:43.648689   21811 kapi.go:59] client config for ha-565925: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.crt", KeyFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.key", CAFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfaf80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
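
[Editor's note] The rest.Config above is built from the kubeconfig that was just written: it points at the HA VIP endpoint https://192.168.39.254:8443 and authenticates with the profile's client cert and key, which is how the storageclass GET/PUT below are issued. A sketch, using the standard client-go helpers, of loading the same kubeconfig and listing storage classes; the kubeconfig path comes from the log and client-go availability is assumed:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path from the log; any kubeconfig with client-cert auth works the same way.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19046-3880/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Mirrors the GET .../apis/storage.k8s.io/v1/storageclasses seen below.
	scs, err := cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, sc := range scs.Items {
		fmt.Println("storageclass:", sc.Name)
	}
}
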
	I0610 10:38:43.649117   21811 cert_rotation.go:137] Starting client certificate rotation controller
	I0610 10:38:43.649284   21811 addons.go:234] Setting addon default-storageclass=true in "ha-565925"
	I0610 10:38:43.649313   21811 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:38:43.649542   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:38:43.649566   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:38:43.661386   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32803
	I0610 10:38:43.661777   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:38:43.662315   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:38:43.662335   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:38:43.662683   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:38:43.662844   21811 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:38:43.663363   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41383
	I0610 10:38:43.663824   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:38:43.664475   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:38:43.664490   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:38:43.664584   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:38:43.666822   21811 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 10:38:43.665077   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:38:43.668074   21811 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 10:38:43.668094   21811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 10:38:43.668111   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:38:43.668663   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:38:43.668711   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:38:43.671104   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:43.671506   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:43.671529   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:43.671863   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:38:43.672028   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:43.672157   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:38:43.672312   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:38:43.684209   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39985
	I0610 10:38:43.684652   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:38:43.685166   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:38:43.685204   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:38:43.685521   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:38:43.685714   21811 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:38:43.687447   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:38:43.687710   21811 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 10:38:43.687724   21811 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 10:38:43.687738   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:38:43.690055   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:43.690391   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:38:43.690422   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:38:43.690582   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:38:43.690753   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:38:43.690865   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:38:43.690984   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:38:43.747501   21811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0610 10:38:43.815096   21811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 10:38:43.829882   21811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 10:38:44.225366   21811 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
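
The ssh_runner call above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (192.168.39.1). A minimal verification sketch, not part of minikube, assuming kubectl is on PATH and a kubeconfig context named ha-565925:

    // corednscheck.go - illustrative only: shells out to kubectl and reports
    // whether the Corefile carries the injected host.minikube.internal block.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Context name taken from the profile in the log; adjust as needed.
    	out, err := exec.Command("kubectl", "--context", "ha-565925",
    		"-n", "kube-system", "get", "configmap", "coredns",
    		"-o", "jsonpath={.data.Corefile}").Output()
    	if err != nil {
    		log.Fatalf("reading coredns ConfigMap: %v", err)
    	}
    	if strings.Contains(string(out), "host.minikube.internal") {
    		fmt.Println("hosts block present")
    	} else {
    		fmt.Println("hosts block missing")
    	}
    }
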
	I0610 10:38:44.225458   21811 main.go:141] libmachine: Making call to close driver server
	I0610 10:38:44.225483   21811 main.go:141] libmachine: (ha-565925) Calling .Close
	I0610 10:38:44.225775   21811 main.go:141] libmachine: (ha-565925) DBG | Closing plugin on server side
	I0610 10:38:44.225801   21811 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:38:44.225829   21811 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:38:44.225851   21811 main.go:141] libmachine: Making call to close driver server
	I0610 10:38:44.225860   21811 main.go:141] libmachine: (ha-565925) Calling .Close
	I0610 10:38:44.226116   21811 main.go:141] libmachine: (ha-565925) DBG | Closing plugin on server side
	I0610 10:38:44.226172   21811 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:38:44.226186   21811 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:38:44.226297   21811 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0610 10:38:44.226311   21811 round_trippers.go:469] Request Headers:
	I0610 10:38:44.226323   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:38:44.226332   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:38:44.240471   21811 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0610 10:38:44.241103   21811 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0610 10:38:44.241118   21811 round_trippers.go:469] Request Headers:
	I0610 10:38:44.241126   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:38:44.241131   21811 round_trippers.go:473]     Content-Type: application/json
	I0610 10:38:44.241134   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:38:44.243493   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:38:44.243669   21811 main.go:141] libmachine: Making call to close driver server
	I0610 10:38:44.243685   21811 main.go:141] libmachine: (ha-565925) Calling .Close
	I0610 10:38:44.243948   21811 main.go:141] libmachine: (ha-565925) DBG | Closing plugin on server side
	I0610 10:38:44.243976   21811 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:38:44.243985   21811 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:38:44.446679   21811 main.go:141] libmachine: Making call to close driver server
	I0610 10:38:44.446716   21811 main.go:141] libmachine: (ha-565925) Calling .Close
	I0610 10:38:44.447048   21811 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:38:44.447075   21811 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:38:44.447086   21811 main.go:141] libmachine: Making call to close driver server
	I0610 10:38:44.447101   21811 main.go:141] libmachine: (ha-565925) Calling .Close
	I0610 10:38:44.447356   21811 main.go:141] libmachine: Successfully made call to close driver server
	I0610 10:38:44.447384   21811 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 10:38:44.447368   21811 main.go:141] libmachine: (ha-565925) DBG | Closing plugin on server side
	I0610 10:38:44.449662   21811 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0610 10:38:44.450910   21811 addons.go:510] duration metric: took 821.796595ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0610 10:38:44.450948   21811 start.go:245] waiting for cluster config update ...
	I0610 10:38:44.450963   21811 start.go:254] writing updated cluster config ...
	I0610 10:38:44.452537   21811 out.go:177] 
	I0610 10:38:44.454465   21811 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:38:44.454535   21811 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json ...
	I0610 10:38:44.456198   21811 out.go:177] * Starting "ha-565925-m02" control-plane node in "ha-565925" cluster
	I0610 10:38:44.457305   21811 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 10:38:44.457329   21811 cache.go:56] Caching tarball of preloaded images
	I0610 10:38:44.457415   21811 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 10:38:44.457428   21811 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0610 10:38:44.457500   21811 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json ...
	I0610 10:38:44.457661   21811 start.go:360] acquireMachinesLock for ha-565925-m02: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:38:44.457702   21811 start.go:364] duration metric: took 22.998µs to acquireMachinesLock for "ha-565925-m02"
	I0610 10:38:44.457719   21811 start.go:93] Provisioning new machine with config: &{Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 10:38:44.457782   21811 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0610 10:38:44.459263   21811 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 10:38:44.459339   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:38:44.459362   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:38:44.473672   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38629
	I0610 10:38:44.474063   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:38:44.474521   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:38:44.474540   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:38:44.474850   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:38:44.475045   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetMachineName
	I0610 10:38:44.475214   21811 main.go:141] libmachine: (ha-565925-m02) Calling .DriverName
	I0610 10:38:44.475368   21811 start.go:159] libmachine.API.Create for "ha-565925" (driver="kvm2")
	I0610 10:38:44.475390   21811 client.go:168] LocalClient.Create starting
	I0610 10:38:44.475421   21811 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem
	I0610 10:38:44.475457   21811 main.go:141] libmachine: Decoding PEM data...
	I0610 10:38:44.475472   21811 main.go:141] libmachine: Parsing certificate...
	I0610 10:38:44.475539   21811 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem
	I0610 10:38:44.475563   21811 main.go:141] libmachine: Decoding PEM data...
	I0610 10:38:44.475575   21811 main.go:141] libmachine: Parsing certificate...
	I0610 10:38:44.475605   21811 main.go:141] libmachine: Running pre-create checks...
	I0610 10:38:44.475617   21811 main.go:141] libmachine: (ha-565925-m02) Calling .PreCreateCheck
	I0610 10:38:44.475759   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetConfigRaw
	I0610 10:38:44.476100   21811 main.go:141] libmachine: Creating machine...
	I0610 10:38:44.476113   21811 main.go:141] libmachine: (ha-565925-m02) Calling .Create
	I0610 10:38:44.476220   21811 main.go:141] libmachine: (ha-565925-m02) Creating KVM machine...
	I0610 10:38:44.477399   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found existing default KVM network
	I0610 10:38:44.477598   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found existing private KVM network mk-ha-565925
	I0610 10:38:44.477769   21811 main.go:141] libmachine: (ha-565925-m02) Setting up store path in /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02 ...
	I0610 10:38:44.477792   21811 main.go:141] libmachine: (ha-565925-m02) Building disk image from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0610 10:38:44.477817   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:44.477729   22211 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:38:44.477904   21811 main.go:141] libmachine: (ha-565925-m02) Downloading /home/jenkins/minikube-integration/19046-3880/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 10:38:44.706036   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:44.705903   22211 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/id_rsa...
	I0610 10:38:45.145834   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:45.145701   22211 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/ha-565925-m02.rawdisk...
	I0610 10:38:45.145871   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Writing magic tar header
	I0610 10:38:45.145888   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Writing SSH key tar header
	I0610 10:38:45.145910   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:45.145836   22211 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02 ...
	I0610 10:38:45.145995   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02
	I0610 10:38:45.146025   21811 main.go:141] libmachine: (ha-565925-m02) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02 (perms=drwx------)
	I0610 10:38:45.146038   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines
	I0610 10:38:45.146050   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:38:45.146057   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880
	I0610 10:38:45.146066   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0610 10:38:45.146075   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Checking permissions on dir: /home/jenkins
	I0610 10:38:45.146085   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Checking permissions on dir: /home
	I0610 10:38:45.146096   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Skipping /home - not owner
	I0610 10:38:45.146108   21811 main.go:141] libmachine: (ha-565925-m02) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines (perms=drwxr-xr-x)
	I0610 10:38:45.146123   21811 main.go:141] libmachine: (ha-565925-m02) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube (perms=drwxr-xr-x)
	I0610 10:38:45.146130   21811 main.go:141] libmachine: (ha-565925-m02) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880 (perms=drwxrwxr-x)
	I0610 10:38:45.146141   21811 main.go:141] libmachine: (ha-565925-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0610 10:38:45.146146   21811 main.go:141] libmachine: (ha-565925-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0610 10:38:45.146154   21811 main.go:141] libmachine: (ha-565925-m02) Creating domain...
	I0610 10:38:45.147168   21811 main.go:141] libmachine: (ha-565925-m02) define libvirt domain using xml: 
	I0610 10:38:45.147192   21811 main.go:141] libmachine: (ha-565925-m02) <domain type='kvm'>
	I0610 10:38:45.147202   21811 main.go:141] libmachine: (ha-565925-m02)   <name>ha-565925-m02</name>
	I0610 10:38:45.147210   21811 main.go:141] libmachine: (ha-565925-m02)   <memory unit='MiB'>2200</memory>
	I0610 10:38:45.147219   21811 main.go:141] libmachine: (ha-565925-m02)   <vcpu>2</vcpu>
	I0610 10:38:45.147227   21811 main.go:141] libmachine: (ha-565925-m02)   <features>
	I0610 10:38:45.147235   21811 main.go:141] libmachine: (ha-565925-m02)     <acpi/>
	I0610 10:38:45.147246   21811 main.go:141] libmachine: (ha-565925-m02)     <apic/>
	I0610 10:38:45.147254   21811 main.go:141] libmachine: (ha-565925-m02)     <pae/>
	I0610 10:38:45.147266   21811 main.go:141] libmachine: (ha-565925-m02)     
	I0610 10:38:45.147272   21811 main.go:141] libmachine: (ha-565925-m02)   </features>
	I0610 10:38:45.147280   21811 main.go:141] libmachine: (ha-565925-m02)   <cpu mode='host-passthrough'>
	I0610 10:38:45.147284   21811 main.go:141] libmachine: (ha-565925-m02)   
	I0610 10:38:45.147291   21811 main.go:141] libmachine: (ha-565925-m02)   </cpu>
	I0610 10:38:45.147296   21811 main.go:141] libmachine: (ha-565925-m02)   <os>
	I0610 10:38:45.147302   21811 main.go:141] libmachine: (ha-565925-m02)     <type>hvm</type>
	I0610 10:38:45.147307   21811 main.go:141] libmachine: (ha-565925-m02)     <boot dev='cdrom'/>
	I0610 10:38:45.147313   21811 main.go:141] libmachine: (ha-565925-m02)     <boot dev='hd'/>
	I0610 10:38:45.147318   21811 main.go:141] libmachine: (ha-565925-m02)     <bootmenu enable='no'/>
	I0610 10:38:45.147325   21811 main.go:141] libmachine: (ha-565925-m02)   </os>
	I0610 10:38:45.147330   21811 main.go:141] libmachine: (ha-565925-m02)   <devices>
	I0610 10:38:45.147338   21811 main.go:141] libmachine: (ha-565925-m02)     <disk type='file' device='cdrom'>
	I0610 10:38:45.147347   21811 main.go:141] libmachine: (ha-565925-m02)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/boot2docker.iso'/>
	I0610 10:38:45.147362   21811 main.go:141] libmachine: (ha-565925-m02)       <target dev='hdc' bus='scsi'/>
	I0610 10:38:45.147370   21811 main.go:141] libmachine: (ha-565925-m02)       <readonly/>
	I0610 10:38:45.147377   21811 main.go:141] libmachine: (ha-565925-m02)     </disk>
	I0610 10:38:45.147391   21811 main.go:141] libmachine: (ha-565925-m02)     <disk type='file' device='disk'>
	I0610 10:38:45.147402   21811 main.go:141] libmachine: (ha-565925-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0610 10:38:45.147410   21811 main.go:141] libmachine: (ha-565925-m02)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/ha-565925-m02.rawdisk'/>
	I0610 10:38:45.147417   21811 main.go:141] libmachine: (ha-565925-m02)       <target dev='hda' bus='virtio'/>
	I0610 10:38:45.147422   21811 main.go:141] libmachine: (ha-565925-m02)     </disk>
	I0610 10:38:45.147435   21811 main.go:141] libmachine: (ha-565925-m02)     <interface type='network'>
	I0610 10:38:45.147448   21811 main.go:141] libmachine: (ha-565925-m02)       <source network='mk-ha-565925'/>
	I0610 10:38:45.147458   21811 main.go:141] libmachine: (ha-565925-m02)       <model type='virtio'/>
	I0610 10:38:45.147471   21811 main.go:141] libmachine: (ha-565925-m02)     </interface>
	I0610 10:38:45.147484   21811 main.go:141] libmachine: (ha-565925-m02)     <interface type='network'>
	I0610 10:38:45.147493   21811 main.go:141] libmachine: (ha-565925-m02)       <source network='default'/>
	I0610 10:38:45.147498   21811 main.go:141] libmachine: (ha-565925-m02)       <model type='virtio'/>
	I0610 10:38:45.147505   21811 main.go:141] libmachine: (ha-565925-m02)     </interface>
	I0610 10:38:45.147510   21811 main.go:141] libmachine: (ha-565925-m02)     <serial type='pty'>
	I0610 10:38:45.147518   21811 main.go:141] libmachine: (ha-565925-m02)       <target port='0'/>
	I0610 10:38:45.147528   21811 main.go:141] libmachine: (ha-565925-m02)     </serial>
	I0610 10:38:45.147540   21811 main.go:141] libmachine: (ha-565925-m02)     <console type='pty'>
	I0610 10:38:45.147554   21811 main.go:141] libmachine: (ha-565925-m02)       <target type='serial' port='0'/>
	I0610 10:38:45.147564   21811 main.go:141] libmachine: (ha-565925-m02)     </console>
	I0610 10:38:45.147573   21811 main.go:141] libmachine: (ha-565925-m02)     <rng model='virtio'>
	I0610 10:38:45.147585   21811 main.go:141] libmachine: (ha-565925-m02)       <backend model='random'>/dev/random</backend>
	I0610 10:38:45.147593   21811 main.go:141] libmachine: (ha-565925-m02)     </rng>
	I0610 10:38:45.147604   21811 main.go:141] libmachine: (ha-565925-m02)     
	I0610 10:38:45.147614   21811 main.go:141] libmachine: (ha-565925-m02)     
	I0610 10:38:45.147640   21811 main.go:141] libmachine: (ha-565925-m02)   </devices>
	I0610 10:38:45.147662   21811 main.go:141] libmachine: (ha-565925-m02) </domain>
	I0610 10:38:45.147676   21811 main.go:141] libmachine: (ha-565925-m02) 
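
The block above is the libvirt domain XML the kvm2 driver defines for ha-565925-m02: 2 vCPUs, 2200 MiB of memory, a boot ISO plus a raw disk, and two virtio NICs (mk-ha-565925 and default). A minimal sketch, standard library only, that reads the headline fields back out of such a definition; the struct tags below are written for this illustration and are not minikube's types:

    package main

    import (
    	"encoding/xml"
    	"fmt"
    	"log"
    )

    // Minimal mirror of the fields printed in the log; not libvirt's full schema.
    type domain struct {
    	Name   string `xml:"name"`
    	Memory struct {
    		Unit  string `xml:"unit,attr"`
    		Value string `xml:",chardata"`
    	} `xml:"memory"`
    	VCPU       int `xml:"vcpu"`
    	Interfaces []struct {
    		Source struct {
    			Network string `xml:"network,attr"`
    		} `xml:"source"`
    	} `xml:"devices>interface"`
    }

    const domainXML = `<domain type='kvm'>
      <name>ha-565925-m02</name>
      <memory unit='MiB'>2200</memory>
      <vcpu>2</vcpu>
      <devices>
        <interface type='network'><source network='mk-ha-565925'/></interface>
        <interface type='network'><source network='default'/></interface>
      </devices>
    </domain>`

    func main() {
    	var d domain
    	if err := xml.Unmarshal([]byte(domainXML), &d); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("%s: %d vCPU, %s %s, networks:", d.Name, d.VCPU, d.Memory.Value, d.Memory.Unit)
    	for _, i := range d.Interfaces {
    		fmt.Printf(" %s", i.Source.Network)
    	}
    	fmt.Println()
    }
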
	I0610 10:38:45.154092   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:5e:8a:ca in network default
	I0610 10:38:45.154668   21811 main.go:141] libmachine: (ha-565925-m02) Ensuring networks are active...
	I0610 10:38:45.154693   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:45.155410   21811 main.go:141] libmachine: (ha-565925-m02) Ensuring network default is active
	I0610 10:38:45.155685   21811 main.go:141] libmachine: (ha-565925-m02) Ensuring network mk-ha-565925 is active
	I0610 10:38:45.156099   21811 main.go:141] libmachine: (ha-565925-m02) Getting domain xml...
	I0610 10:38:45.156771   21811 main.go:141] libmachine: (ha-565925-m02) Creating domain...
	I0610 10:38:46.358608   21811 main.go:141] libmachine: (ha-565925-m02) Waiting to get IP...
	I0610 10:38:46.359386   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:46.359869   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:38:46.359898   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:46.359834   22211 retry.go:31] will retry after 263.074572ms: waiting for machine to come up
	I0610 10:38:46.624279   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:46.624842   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:38:46.624872   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:46.624799   22211 retry.go:31] will retry after 257.651083ms: waiting for machine to come up
	I0610 10:38:46.884256   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:46.884717   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:38:46.884745   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:46.884673   22211 retry.go:31] will retry after 394.193995ms: waiting for machine to come up
	I0610 10:38:47.280088   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:47.280587   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:38:47.280617   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:47.280522   22211 retry.go:31] will retry after 458.928377ms: waiting for machine to come up
	I0610 10:38:47.741103   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:47.741634   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:38:47.741663   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:47.741596   22211 retry.go:31] will retry after 464.110472ms: waiting for machine to come up
	I0610 10:38:48.207484   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:48.208444   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:38:48.208476   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:48.208399   22211 retry.go:31] will retry after 679.15084ms: waiting for machine to come up
	I0610 10:38:48.888988   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:48.889404   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:38:48.889427   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:48.889356   22211 retry.go:31] will retry after 817.452236ms: waiting for machine to come up
	I0610 10:38:49.708579   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:49.709093   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:38:49.709123   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:49.709033   22211 retry.go:31] will retry after 1.243856521s: waiting for machine to come up
	I0610 10:38:50.954152   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:50.954633   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:38:50.954660   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:50.954587   22211 retry.go:31] will retry after 1.365236787s: waiting for machine to come up
	I0610 10:38:52.322096   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:52.322506   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:38:52.322520   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:52.322475   22211 retry.go:31] will retry after 1.597490731s: waiting for machine to come up
	I0610 10:38:53.922196   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:53.922598   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:38:53.922624   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:53.922547   22211 retry.go:31] will retry after 2.80774575s: waiting for machine to come up
	I0610 10:38:56.732630   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:56.733049   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:38:56.733071   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:56.732999   22211 retry.go:31] will retry after 2.939623483s: waiting for machine to come up
	I0610 10:38:59.674486   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:38:59.674976   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:38:59.675008   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:38:59.674921   22211 retry.go:31] will retry after 2.809876254s: waiting for machine to come up
	I0610 10:39:02.487793   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:02.488160   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find current IP address of domain ha-565925-m02 in network mk-ha-565925
	I0610 10:39:02.488183   21811 main.go:141] libmachine: (ha-565925-m02) DBG | I0610 10:39:02.488134   22211 retry.go:31] will retry after 4.506866771s: waiting for machine to come up
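
The retry.go lines above poll the DHCP leases of mk-ha-565925 with growing delays until the new domain picks up an address. A small sketch of that wait-with-backoff shape; the probe function, delays, and timeout here are illustrative, not minikube's actual retry package:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitFor keeps calling probe with a jittered, growing delay until it
    // succeeds or the deadline passes - the same shape as the
    // "will retry after ..." lines in the log.
    func waitFor(probe func() (string, error), deadline time.Duration) (string, error) {
    	delay := 250 * time.Millisecond
    	end := time.Now().Add(deadline)
    	for time.Now().Before(end) {
    		if ip, err := probe(); err == nil {
    			return ip, nil
    		}
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		if delay < 5*time.Second {
    			delay *= 2
    		}
    	}
    	return "", errors.New("timed out waiting for an IP address")
    }

    func main() {
    	// Hypothetical probe; a real one would parse `virsh net-dhcp-leases mk-ha-565925`.
    	attempts := 0
    	ip, err := waitFor(func() (string, error) {
    		attempts++
    		if attempts < 4 {
    			return "", errors.New("no lease yet")
    		}
    		return "192.168.39.230", nil
    	}, time.Minute)
    	fmt.Println(ip, err)
    }
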
	I0610 10:39:06.997754   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:06.998215   21811 main.go:141] libmachine: (ha-565925-m02) Found IP for machine: 192.168.39.230
	I0610 10:39:06.998231   21811 main.go:141] libmachine: (ha-565925-m02) Reserving static IP address...
	I0610 10:39:06.998242   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has current primary IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:06.998686   21811 main.go:141] libmachine: (ha-565925-m02) DBG | unable to find host DHCP lease matching {name: "ha-565925-m02", mac: "52:54:00:c0:fd:0f", ip: "192.168.39.230"} in network mk-ha-565925
	I0610 10:39:07.071381   21811 main.go:141] libmachine: (ha-565925-m02) Reserved static IP address: 192.168.39.230
	I0610 10:39:07.071407   21811 main.go:141] libmachine: (ha-565925-m02) Waiting for SSH to be available...
	I0610 10:39:07.071417   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Getting to WaitForSSH function...
	I0610 10:39:07.074169   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.074624   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:07.074652   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.074751   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Using SSH client type: external
	I0610 10:39:07.074774   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/id_rsa (-rw-------)
	I0610 10:39:07.074811   21811 main.go:141] libmachine: (ha-565925-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0610 10:39:07.074824   21811 main.go:141] libmachine: (ha-565925-m02) DBG | About to run SSH command:
	I0610 10:39:07.074886   21811 main.go:141] libmachine: (ha-565925-m02) DBG | exit 0
	I0610 10:39:07.200853   21811 main.go:141] libmachine: (ha-565925-m02) DBG | SSH cmd err, output: <nil>: 
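
WaitForSSH above first probes the guest with the external ssh binary (running `exit 0`), then the driver switches to a native client. A minimal sketch of that native-style check using golang.org/x/crypto/ssh, with the key path and address copied from the log; this is an illustration, not minikube's sshutil code:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Key path and address taken from the log above; adjust for your profile.
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
    		Timeout:         10 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.230:22", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()
    	if err := sess.Run("exit 0"); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("SSH is available")
    }
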
	I0610 10:39:07.201150   21811 main.go:141] libmachine: (ha-565925-m02) KVM machine creation complete!
	I0610 10:39:07.201495   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetConfigRaw
	I0610 10:39:07.202104   21811 main.go:141] libmachine: (ha-565925-m02) Calling .DriverName
	I0610 10:39:07.202334   21811 main.go:141] libmachine: (ha-565925-m02) Calling .DriverName
	I0610 10:39:07.202505   21811 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0610 10:39:07.202521   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetState
	I0610 10:39:07.203730   21811 main.go:141] libmachine: Detecting operating system of created instance...
	I0610 10:39:07.203745   21811 main.go:141] libmachine: Waiting for SSH to be available...
	I0610 10:39:07.203753   21811 main.go:141] libmachine: Getting to WaitForSSH function...
	I0610 10:39:07.203761   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:39:07.206128   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.206463   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:07.206488   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.206630   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:39:07.206799   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:07.206967   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:07.207154   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:39:07.207301   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:39:07.207520   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0610 10:39:07.207533   21811 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0610 10:39:07.320080   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 10:39:07.320100   21811 main.go:141] libmachine: Detecting the provisioner...
	I0610 10:39:07.320109   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:39:07.322974   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.323356   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:07.323388   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.323479   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:39:07.323658   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:07.323847   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:07.323992   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:39:07.324264   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:39:07.324429   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0610 10:39:07.324440   21811 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0610 10:39:07.433331   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0610 10:39:07.433413   21811 main.go:141] libmachine: found compatible host: buildroot
	I0610 10:39:07.433429   21811 main.go:141] libmachine: Provisioning with buildroot...
	I0610 10:39:07.433441   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetMachineName
	I0610 10:39:07.433729   21811 buildroot.go:166] provisioning hostname "ha-565925-m02"
	I0610 10:39:07.433758   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetMachineName
	I0610 10:39:07.433956   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:39:07.436807   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.437300   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:07.437330   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.437511   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:39:07.437696   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:07.437874   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:07.438015   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:39:07.438219   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:39:07.438436   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0610 10:39:07.438458   21811 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565925-m02 && echo "ha-565925-m02" | sudo tee /etc/hostname
	I0610 10:39:07.562817   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565925-m02
	
	I0610 10:39:07.562849   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:39:07.565629   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.565944   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:07.565971   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.566151   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:39:07.566335   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:07.566483   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:07.566610   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:39:07.566793   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:39:07.566942   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0610 10:39:07.566962   21811 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565925-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565925-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565925-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 10:39:07.680972   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 10:39:07.681003   21811 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19046-3880/.minikube CaCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19046-3880/.minikube}
	I0610 10:39:07.681026   21811 buildroot.go:174] setting up certificates
	I0610 10:39:07.681037   21811 provision.go:84] configureAuth start
	I0610 10:39:07.681049   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetMachineName
	I0610 10:39:07.681343   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetIP
	I0610 10:39:07.684015   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.684354   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:07.684385   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.684538   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:39:07.686863   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.687282   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:07.687312   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.687479   21811 provision.go:143] copyHostCerts
	I0610 10:39:07.687506   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 10:39:07.687535   21811 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem, removing ...
	I0610 10:39:07.687541   21811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 10:39:07.687597   21811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem (1082 bytes)
	I0610 10:39:07.687669   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 10:39:07.687686   21811 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem, removing ...
	I0610 10:39:07.687692   21811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 10:39:07.687715   21811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem (1123 bytes)
	I0610 10:39:07.687755   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 10:39:07.687771   21811 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem, removing ...
	I0610 10:39:07.687777   21811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 10:39:07.687797   21811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem (1675 bytes)
	I0610 10:39:07.687843   21811 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem org=jenkins.ha-565925-m02 san=[127.0.0.1 192.168.39.230 ha-565925-m02 localhost minikube]
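
The provision step above issues a server certificate whose SANs cover the loopback address, the node IP, the hostname, and the generic names localhost/minikube. A minimal sketch of building such a SAN list with crypto/x509; it self-signs for brevity (minikube signs with its CA key) and the key type and lifetime are illustrative assumptions:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"log"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-565925-m02"}},
    		// SANs mirroring the "san=[...]" list in the log above.
    		DNSNames:    []string{"ha-565925-m02", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.230")},
    		NotBefore:   time.Now(),
    		NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
    		KeyUsage:    x509.KeyUsageDigitalSignature,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// Self-signed here for brevity; a real setup would pass the CA cert and key.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("issued %d-byte server cert for %v / %v\n", len(der), tmpl.DNSNames, tmpl.IPAddresses)
    }
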
	I0610 10:39:07.787236   21811 provision.go:177] copyRemoteCerts
	I0610 10:39:07.787289   21811 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 10:39:07.787309   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:39:07.790084   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.790474   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:07.790504   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.790655   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:39:07.790797   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:07.790925   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:39:07.791097   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/id_rsa Username:docker}
	I0610 10:39:07.874638   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 10:39:07.874703   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 10:39:07.896656   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 10:39:07.896718   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0610 10:39:07.919401   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 10:39:07.919464   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 10:39:07.944002   21811 provision.go:87] duration metric: took 262.952427ms to configureAuth
	I0610 10:39:07.944029   21811 buildroot.go:189] setting minikube options for container-runtime
	I0610 10:39:07.944222   21811 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:39:07.944310   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:39:07.946955   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.947346   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:07.947377   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:07.947579   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:39:07.947732   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:07.947888   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:07.947993   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:39:07.948173   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:39:07.948331   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0610 10:39:07.948343   21811 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 10:39:08.222700   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 10:39:08.222729   21811 main.go:141] libmachine: Checking connection to Docker...
	I0610 10:39:08.222736   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetURL
	I0610 10:39:08.224193   21811 main.go:141] libmachine: (ha-565925-m02) DBG | Using libvirt version 6000000
	I0610 10:39:08.226332   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.226683   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:08.226715   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.226820   21811 main.go:141] libmachine: Docker is up and running!
	I0610 10:39:08.226833   21811 main.go:141] libmachine: Reticulating splines...
	I0610 10:39:08.226840   21811 client.go:171] duration metric: took 23.751443228s to LocalClient.Create
	I0610 10:39:08.226861   21811 start.go:167] duration metric: took 23.751493974s to libmachine.API.Create "ha-565925"
	I0610 10:39:08.226874   21811 start.go:293] postStartSetup for "ha-565925-m02" (driver="kvm2")
	I0610 10:39:08.226889   21811 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 10:39:08.226910   21811 main.go:141] libmachine: (ha-565925-m02) Calling .DriverName
	I0610 10:39:08.227190   21811 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 10:39:08.227224   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:39:08.229415   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.229716   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:08.229739   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.229873   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:39:08.230069   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:08.230219   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:39:08.230359   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/id_rsa Username:docker}
	I0610 10:39:08.315120   21811 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 10:39:08.319099   21811 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 10:39:08.319128   21811 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/addons for local assets ...
	I0610 10:39:08.319210   21811 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/files for local assets ...
	I0610 10:39:08.319286   21811 filesync.go:149] local asset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> 107582.pem in /etc/ssl/certs
	I0610 10:39:08.319295   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /etc/ssl/certs/107582.pem
	I0610 10:39:08.319370   21811 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 10:39:08.328529   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /etc/ssl/certs/107582.pem (1708 bytes)
	I0610 10:39:08.351550   21811 start.go:296] duration metric: took 124.656239ms for postStartSetup
	I0610 10:39:08.351593   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetConfigRaw
	I0610 10:39:08.352278   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetIP
	I0610 10:39:08.354818   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.355275   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:08.355306   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.355509   21811 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json ...
	I0610 10:39:08.355685   21811 start.go:128] duration metric: took 23.897893274s to createHost
	I0610 10:39:08.355706   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:39:08.357933   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.358236   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:08.358262   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.358361   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:39:08.358556   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:08.358690   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:08.358788   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:39:08.358930   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:39:08.359120   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0610 10:39:08.359134   21811 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 10:39:08.465359   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718015948.439565666
	
	I0610 10:39:08.465390   21811 fix.go:216] guest clock: 1718015948.439565666
	I0610 10:39:08.465400   21811 fix.go:229] Guest: 2024-06-10 10:39:08.439565666 +0000 UTC Remote: 2024-06-10 10:39:08.355695611 +0000 UTC m=+77.141782194 (delta=83.870055ms)
	I0610 10:39:08.465419   21811 fix.go:200] guest clock delta is within tolerance: 83.870055ms
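The fix.go lines above compare the guest clock (read via `date +%s.%N`) against the host time and only resync when the delta exceeds a tolerance window. A minimal Go sketch of that comparison; the 2s threshold is an assumed value for illustration, not minikube's actual constant:

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinTolerance reports whether the guest clock is close enough to the
    // host clock to skip resyncing. threshold is assumed for illustration.
    func withinTolerance(host, guest time.Time, threshold time.Duration) bool {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta <= threshold
    }

    func main() {
    	host := time.Now()
    	guest := host.Add(83 * time.Millisecond) // delta in the same range as the log above
    	fmt.Println("within tolerance:", withinTolerance(host, guest, 2*time.Second))
    }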
	I0610 10:39:08.465424   21811 start.go:83] releasing machines lock for "ha-565925-m02", held for 24.007713656s
	I0610 10:39:08.465441   21811 main.go:141] libmachine: (ha-565925-m02) Calling .DriverName
	I0610 10:39:08.465733   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetIP
	I0610 10:39:08.468437   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.468743   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:08.468769   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.471085   21811 out.go:177] * Found network options:
	I0610 10:39:08.472391   21811 out.go:177]   - NO_PROXY=192.168.39.208
	W0610 10:39:08.473475   21811 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 10:39:08.473514   21811 main.go:141] libmachine: (ha-565925-m02) Calling .DriverName
	I0610 10:39:08.474053   21811 main.go:141] libmachine: (ha-565925-m02) Calling .DriverName
	I0610 10:39:08.474246   21811 main.go:141] libmachine: (ha-565925-m02) Calling .DriverName
	I0610 10:39:08.474312   21811 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 10:39:08.474351   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	W0610 10:39:08.474427   21811 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 10:39:08.474480   21811 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 10:39:08.474495   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:39:08.477592   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.477691   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.477969   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:08.477998   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.478085   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:08.478107   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:08.478197   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:39:08.478323   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:39:08.478400   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:08.478466   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:39:08.478535   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:39:08.478600   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:39:08.478693   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/id_rsa Username:docker}
	I0610 10:39:08.478812   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/id_rsa Username:docker}
	I0610 10:39:08.722285   21811 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 10:39:08.728263   21811 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 10:39:08.728339   21811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 10:39:08.744058   21811 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
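The find/mv step above sidelines bridge and podman CNI configs (renaming them to *.mk_disabled) so they cannot conflict with the CNI minikube will install. A rough Go sketch of the same renaming pass over /etc/cni/net.d; error handling is trimmed and root privileges are assumed:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // disableBridgeCNI renames bridge/podman CNI configs to *.mk_disabled,
    // mirroring the find/mv command in the log above.
    func disableBridgeCNI(dir string) ([]string, error) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return nil, err
    	}
    	var disabled []string
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join(dir, name)
    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
    				return nil, err
    			}
    			disabled = append(disabled, src)
    		}
    	}
    	return disabled, nil
    }

    func main() {
    	disabled, err := disableBridgeCNI("/etc/cni/net.d")
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println("disabled:", disabled)
    }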
	I0610 10:39:08.744081   21811 start.go:494] detecting cgroup driver to use...
	I0610 10:39:08.744146   21811 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 10:39:08.761715   21811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 10:39:08.775199   21811 docker.go:217] disabling cri-docker service (if available) ...
	I0610 10:39:08.775260   21811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 10:39:08.789061   21811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 10:39:08.802987   21811 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 10:39:08.935904   21811 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 10:39:09.078033   21811 docker.go:233] disabling docker service ...
	I0610 10:39:09.078110   21811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 10:39:09.093795   21811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 10:39:09.107299   21811 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 10:39:09.257599   21811 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 10:39:09.381188   21811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 10:39:09.395395   21811 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 10:39:09.413435   21811 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0610 10:39:09.413493   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:39:09.423621   21811 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 10:39:09.423678   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:39:09.433604   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:39:09.445821   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:39:09.456663   21811 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 10:39:09.466774   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:39:09.476562   21811 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:39:09.492573   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
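The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O pins registry.k8s.io/pause:3.9 as its pause image and uses cgroupfs as its cgroup manager. A hedged Go sketch of the two core substitutions (the conmon_cgroup and default_sysctls edits from the log are omitted for brevity):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // patchCrioConf applies the same substitutions as the sed commands above:
    // pin the pause image and switch the cgroup manager to cgroupfs.
    func patchCrioConf(path string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
    	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
    		fmt.Println("error:", err)
    	}
    }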
	I0610 10:39:09.502454   21811 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 10:39:09.511065   21811 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0610 10:39:09.511117   21811 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0610 10:39:09.522654   21811 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
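When the net.bridge.bridge-nf-call-iptables sysctl cannot be read (the br_netfilter module is not loaded yet, as in the stderr above), the flow falls back to modprobe and then enables IPv4 forwarding. A small Go sketch of that fallback, shelling out to the same commands; it needs root to actually run:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // ensureNetfilter mirrors the fallback in the log: probe the bridge
    // netfilter sysctl, load br_netfilter if the key is missing, then turn
    // on IPv4 forwarding.
    func ensureNetfilter() error {
    	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		if err := exec.Command("modprobe", "br_netfilter"); err != nil && err.Run() != nil {
    			return fmt.Errorf("modprobe br_netfilter failed")
    		}
    	}
    	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
    }

    func main() {
    	if err := ensureNetfilter(); err != nil {
    		fmt.Println("error:", err)
    	}
    }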
	I0610 10:39:09.532117   21811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:39:09.655738   21811 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0610 10:39:09.788645   21811 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 10:39:09.788720   21811 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 10:39:09.793973   21811 start.go:562] Will wait 60s for crictl version
	I0610 10:39:09.794028   21811 ssh_runner.go:195] Run: which crictl
	I0610 10:39:09.797564   21811 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 10:39:09.834595   21811 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0610 10:39:09.834660   21811 ssh_runner.go:195] Run: crio --version
	I0610 10:39:09.864781   21811 ssh_runner.go:195] Run: crio --version
	I0610 10:39:09.893856   21811 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0610 10:39:09.895407   21811 out.go:177]   - env NO_PROXY=192.168.39.208
	I0610 10:39:09.896638   21811 main.go:141] libmachine: (ha-565925-m02) Calling .GetIP
	I0610 10:39:09.899419   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:09.899843   21811 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:58 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:39:09.899869   21811 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:39:09.900167   21811 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0610 10:39:09.904123   21811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
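The bash one-liner above keeps /etc/hosts idempotent: it only runs when the preceding grep misses, and it drops any stale line for host.minikube.internal before appending the fresh mapping. A rough Go equivalent of that rewrite (file path and names mirror the log):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry removes any stale line ending in "\t"+name and appends
    // "ip\tname", matching the grep/echo pipeline in the log above.
    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // drop stale entry
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
    		fmt.Println("error:", err)
    	}
    }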
	I0610 10:39:09.916287   21811 mustload.go:65] Loading cluster: ha-565925
	I0610 10:39:09.916463   21811 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:39:09.916690   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:39:09.916715   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:39:09.931688   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40919
	I0610 10:39:09.932103   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:39:09.932559   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:39:09.932580   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:39:09.932874   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:39:09.933093   21811 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:39:09.934585   21811 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:39:09.934847   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:39:09.934869   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:39:09.949008   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42831
	I0610 10:39:09.949398   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:39:09.949823   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:39:09.949841   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:39:09.950165   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:39:09.950358   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:39:09.950532   21811 certs.go:68] Setting up /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925 for IP: 192.168.39.230
	I0610 10:39:09.950542   21811 certs.go:194] generating shared ca certs ...
	I0610 10:39:09.950557   21811 certs.go:226] acquiring lock for ca certs: {Name:mke8b68fecbd8b649419d142cdde25446085a9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:39:09.950682   21811 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key
	I0610 10:39:09.950738   21811 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key
	I0610 10:39:09.950751   21811 certs.go:256] generating profile certs ...
	I0610 10:39:09.950831   21811 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.key
	I0610 10:39:09.950864   21811 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.8484982c
	I0610 10:39:09.950883   21811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.8484982c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.230 192.168.39.254]
	I0610 10:39:10.074645   21811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.8484982c ...
	I0610 10:39:10.074672   21811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.8484982c: {Name:mk6b6dcda4e45bea2edd4c7720b62d681e4e7bdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:39:10.074858   21811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.8484982c ...
	I0610 10:39:10.074877   21811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.8484982c: {Name:mk0af6f9fe1bbf80810ba512a39e7977f0d9fb54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:39:10.074969   21811 certs.go:381] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.8484982c -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt
	I0610 10:39:10.075124   21811 certs.go:385] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.8484982c -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key
	I0610 10:39:10.075296   21811 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key
	I0610 10:39:10.075316   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 10:39:10.075334   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 10:39:10.075354   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 10:39:10.075372   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 10:39:10.075388   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 10:39:10.075404   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 10:39:10.075460   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 10:39:10.075486   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 10:39:10.075550   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem (1338 bytes)
	W0610 10:39:10.075590   21811 certs.go:480] ignoring /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758_empty.pem, impossibly tiny 0 bytes
	I0610 10:39:10.075603   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 10:39:10.075637   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem (1082 bytes)
	I0610 10:39:10.075669   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem (1123 bytes)
	I0610 10:39:10.075698   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem (1675 bytes)
	I0610 10:39:10.075752   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem (1708 bytes)
	I0610 10:39:10.075786   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:39:10.075805   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem -> /usr/share/ca-certificates/10758.pem
	I0610 10:39:10.075822   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /usr/share/ca-certificates/107582.pem
	I0610 10:39:10.075862   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:39:10.078847   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:39:10.079250   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:39:10.079282   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:39:10.079389   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:39:10.079593   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:39:10.079716   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:39:10.079850   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:39:10.153380   21811 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0610 10:39:10.157647   21811 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0610 10:39:10.168332   21811 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0610 10:39:10.171943   21811 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0610 10:39:10.182024   21811 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0610 10:39:10.185959   21811 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0610 10:39:10.195911   21811 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0610 10:39:10.199956   21811 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0610 10:39:10.209493   21811 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0610 10:39:10.213095   21811 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0610 10:39:10.222774   21811 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0610 10:39:10.226786   21811 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0610 10:39:10.237835   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 10:39:10.262884   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 10:39:10.284815   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 10:39:10.309285   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 10:39:10.331709   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0610 10:39:10.354663   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 10:39:10.376921   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 10:39:10.399148   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 10:39:10.420770   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 10:39:10.442307   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem --> /usr/share/ca-certificates/10758.pem (1338 bytes)
	I0610 10:39:10.463860   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /usr/share/ca-certificates/107582.pem (1708 bytes)
	I0610 10:39:10.484893   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0610 10:39:10.499993   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0610 10:39:10.514852   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0610 10:39:10.531002   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0610 10:39:10.545985   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0610 10:39:10.560631   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0610 10:39:10.575797   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0610 10:39:10.592285   21811 ssh_runner.go:195] Run: openssl version
	I0610 10:39:10.597801   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10758.pem && ln -fs /usr/share/ca-certificates/10758.pem /etc/ssl/certs/10758.pem"
	I0610 10:39:10.610697   21811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10758.pem
	I0610 10:39:10.614973   21811 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:34 /usr/share/ca-certificates/10758.pem
	I0610 10:39:10.615022   21811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10758.pem
	I0610 10:39:10.621057   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10758.pem /etc/ssl/certs/51391683.0"
	I0610 10:39:10.632134   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107582.pem && ln -fs /usr/share/ca-certificates/107582.pem /etc/ssl/certs/107582.pem"
	I0610 10:39:10.643365   21811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107582.pem
	I0610 10:39:10.647813   21811 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:34 /usr/share/ca-certificates/107582.pem
	I0610 10:39:10.647866   21811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107582.pem
	I0610 10:39:10.653463   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107582.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 10:39:10.663550   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 10:39:10.673192   21811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:39:10.677321   21811 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:39:10.677370   21811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:39:10.682626   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
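Each CA dropped into /usr/share/ca-certificates is also linked under its OpenSSL subject hash in /etc/ssl/certs (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the other two above) so the system trust store can resolve it. A sketch of that hash-and-link step, assuming openssl is on PATH:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkByHash creates /etc/ssl/certs/<subject-hash>.0 -> certPath, the same
    // layout produced by the ln -fs commands in the log above.
    func linkByHash(certPath string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return "", err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // recreate if it already exists
    	return link, os.Symlink(certPath, link)
    }

    func main() {
    	link, err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem")
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println("linked:", link)
    }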
	I0610 10:39:10.693262   21811 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 10:39:10.697029   21811 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 10:39:10.697083   21811 kubeadm.go:928] updating node {m02 192.168.39.230 8443 v1.30.1 crio true true} ...
	I0610 10:39:10.697178   21811 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565925-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 10:39:10.697210   21811 kube-vip.go:115] generating kube-vip config ...
	I0610 10:39:10.697245   21811 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0610 10:39:10.714012   21811 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0610 10:39:10.714073   21811 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
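The generated kube-vip static pod above advertises the control-plane VIP 192.168.39.254 on port 8443 with ARP, leader election, and load-balancing enabled. As a quick structural sanity check, a sketch that unmarshals such a manifest into a typed Pod and prints the VIP; it assumes the manifest was written to /etc/kubernetes/manifests/kube-vip.yaml (as it is later in this log) and pulls in sigs.k8s.io/yaml plus k8s.io/api:

    package main

    import (
    	"fmt"
    	"os"

    	corev1 "k8s.io/api/core/v1"
    	"sigs.k8s.io/yaml"
    )

    func main() {
    	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	var pod corev1.Pod
    	if err := yaml.Unmarshal(data, &pod); err != nil {
    		fmt.Println("parse error:", err)
    		return
    	}
    	if len(pod.Spec.Containers) == 0 {
    		fmt.Println("no containers in manifest")
    		return
    	}
    	c := pod.Spec.Containers[0]
    	for _, e := range c.Env {
    		if e.Name == "address" {
    			fmt.Println("VIP:", e.Value) // 192.168.39.254 for the config above
    		}
    	}
    	fmt.Println("image:", c.Image)
    }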
	I0610 10:39:10.714119   21811 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 10:39:10.723444   21811 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0610 10:39:10.723513   21811 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0610 10:39:10.732583   21811 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0610 10:39:10.732612   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0610 10:39:10.732640   21811 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubelet
	I0610 10:39:10.732672   21811 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubeadm
	I0610 10:39:10.732682   21811 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0610 10:39:10.736809   21811 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0610 10:39:10.736838   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0610 10:39:18.739027   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0610 10:39:18.739108   21811 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0610 10:39:18.743681   21811 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0610 10:39:18.743722   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0610 10:39:27.118467   21811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:39:27.132828   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0610 10:39:27.132917   21811 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0610 10:39:27.137087   21811 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0610 10:39:27.137124   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
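Each missing binary is fetched from dl.k8s.io with a checksum=file: reference to its published .sha256, then scp'd into /var/lib/minikube/binaries/v1.30.1 on the node. A self-contained sketch of that download-and-verify pattern (the destination path is illustrative):

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"strings"
    )

    // fetchVerified downloads url to dest and checks it against the hex digest
    // published at url+".sha256", the same checksum=file: scheme seen in the log.
    func fetchVerified(url, dest string) error {
    	sumResp, err := http.Get(url + ".sha256")
    	if err != nil {
    		return err
    	}
    	defer sumResp.Body.Close()
    	sumBytes, err := io.ReadAll(sumResp.Body)
    	if err != nil {
    		return err
    	}
    	want := strings.Fields(string(sumBytes))[0]

    	resp, err := http.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()

    	f, err := os.Create(dest)
    	if err != nil {
    		return err
    	}
    	defer f.Close()

    	h := sha256.New()
    	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
    		return err
    	}
    	if got := hex.EncodeToString(h.Sum(nil)); got != want {
    		return fmt.Errorf("checksum mismatch: got %s want %s", got, want)
    	}
    	return nil
    }

    func main() {
    	err := fetchVerified("https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet", "/tmp/kubelet")
    	fmt.Println("download:", err)
    }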
	I0610 10:39:27.506676   21811 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0610 10:39:27.516027   21811 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0610 10:39:27.532290   21811 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 10:39:27.548268   21811 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0610 10:39:27.564880   21811 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0610 10:39:27.568734   21811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 10:39:27.580388   21811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:39:27.719636   21811 ssh_runner.go:195] Run: sudo systemctl start kubelet
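The unit files scp'd just above give kubelet the ExecStart flags logged earlier (bootstrap kubeconfig, --hostname-override=ha-565925-m02, --node-ip=192.168.39.230), after which systemd is reloaded and kubelet started. A sketch that writes an equivalent 10-kubeadm.conf drop-in and performs the same reload/start; the flag line is copied from the [Service] block above:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // dropIn mirrors the [Service] section logged above for ha-565925-m02.
    const dropIn = `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565925-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
    `

    func main() {
    	dir := "/etc/systemd/system/kubelet.service.d"
    	if err := os.MkdirAll(dir, 0o755); err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	if err := os.WriteFile(dir+"/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	// Pick up the new unit configuration and start the kubelet.
    	_ = exec.Command("systemctl", "daemon-reload").Run()
    	_ = exec.Command("systemctl", "start", "kubelet").Run()
    }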
	I0610 10:39:27.737698   21811 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:39:27.738032   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:39:27.738071   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:39:27.752801   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42527
	I0610 10:39:27.753218   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:39:27.753721   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:39:27.753746   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:39:27.754078   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:39:27.754285   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:39:27.754455   21811 start.go:316] joinCluster: &{Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:39:27.754549   21811 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0610 10:39:27.754567   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:39:27.757868   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:39:27.758394   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:39:27.758417   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:39:27.758672   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:39:27.758853   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:39:27.759017   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:39:27.759898   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:39:27.932467   21811 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 10:39:27.932518   21811 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mirzni.yjdf9m9snyreq4hg --discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565925-m02 --control-plane --apiserver-advertise-address=192.168.39.230 --apiserver-bind-port=8443"
	I0610 10:39:49.366745   21811 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token mirzni.yjdf9m9snyreq4hg --discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565925-m02 --control-plane --apiserver-advertise-address=192.168.39.230 --apiserver-bind-port=8443": (21.434201742s)
	I0610 10:39:49.366782   21811 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0610 10:39:49.936205   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565925-m02 minikube.k8s.io/updated_at=2024_06_10T10_39_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=ha-565925 minikube.k8s.io/primary=false
	I0610 10:39:50.059102   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-565925-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0610 10:39:50.164744   21811 start.go:318] duration metric: took 22.410284983s to joinCluster
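The join itself is a plain kubeadm join against the VIP endpoint, using a token and CA cert hash obtained from `kubeadm token create --print-join-command` on the first control plane, plus the extra flags logged above (ignore-preflight-errors, CRI socket, control-plane, advertise address). A minimal exec sketch of issuing that command from Go; the token and hash placeholders are deliberately not real values:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Placeholders: a real token and CA cert hash come from
    	// "kubeadm token create --print-join-command", as in the log above.
    	args := []string{
    		"join", "control-plane.minikube.internal:8443",
    		"--token", "<token>",
    		"--discovery-token-ca-cert-hash", "sha256:<hash>",
    		"--ignore-preflight-errors=all",
    		"--cri-socket", "unix:///var/run/crio/crio.sock",
    		"--node-name=ha-565925-m02",
    		"--control-plane",
    		"--apiserver-advertise-address=192.168.39.230",
    		"--apiserver-bind-port=8443",
    	}
    	out, err := exec.Command("/var/lib/minikube/binaries/v1.30.1/kubeadm", args...).CombinedOutput()
    	fmt.Println(string(out))
    	if err != nil {
    		fmt.Println("join failed:", err)
    	}
    }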
	I0610 10:39:50.164838   21811 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 10:39:50.166487   21811 out.go:177] * Verifying Kubernetes components...
	I0610 10:39:50.165194   21811 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:39:50.167939   21811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:39:50.440343   21811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 10:39:50.502388   21811 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 10:39:50.502632   21811 kapi.go:59] client config for ha-565925: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.crt", KeyFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.key", CAFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfaf80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0610 10:39:50.502691   21811 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.208:8443
	I0610 10:39:50.502936   21811 node_ready.go:35] waiting up to 6m0s for node "ha-565925-m02" to be "Ready" ...
	I0610 10:39:50.503017   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:50.503029   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:50.503039   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:50.503045   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:50.514120   21811 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0610 10:39:51.004139   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:51.004164   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:51.004176   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:51.004181   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:51.010316   21811 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 10:39:51.504133   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:51.504154   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:51.504162   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:51.504165   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:51.508181   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:52.004000   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:52.004019   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:52.004026   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:52.004030   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:52.007220   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:52.503311   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:52.503332   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:52.503339   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:52.503343   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:52.565666   21811 round_trippers.go:574] Response Status: 200 OK in 62 milliseconds
	I0610 10:39:52.566288   21811 node_ready.go:53] node "ha-565925-m02" has status "Ready":"False"
	I0610 10:39:53.004046   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:53.004065   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:53.004073   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:53.004077   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:53.007233   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:53.503757   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:53.503778   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:53.503785   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:53.503788   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:53.507332   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:54.003676   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:54.003702   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:54.003713   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:54.003719   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:54.009350   21811 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 10:39:54.503153   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:54.503199   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:54.503209   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:54.503215   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:54.506503   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:55.003474   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:55.003500   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:55.003512   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:55.003518   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:55.007184   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:55.008071   21811 node_ready.go:53] node "ha-565925-m02" has status "Ready":"False"
	I0610 10:39:55.503386   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:55.503408   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:55.503416   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:55.503419   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:55.506765   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:56.003563   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:56.003583   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:56.003591   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:56.003595   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:56.007630   21811 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:39:56.503451   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:56.503478   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:56.503488   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:56.503493   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:56.507452   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:57.003293   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:57.003313   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:57.003321   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:57.003325   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:57.006086   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:39:57.503183   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:57.503206   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:57.503214   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:57.503219   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:57.506997   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:57.508088   21811 node_ready.go:53] node "ha-565925-m02" has status "Ready":"False"
	I0610 10:39:58.003837   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:58.003857   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:58.003863   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:58.003867   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:58.007311   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:58.008132   21811 node_ready.go:49] node "ha-565925-m02" has status "Ready":"True"
	I0610 10:39:58.008150   21811 node_ready.go:38] duration metric: took 7.505198344s for node "ha-565925-m02" to be "Ready" ...
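The round_trippers loop above polls GET /api/v1/nodes/ha-565925-m02 until the node reports Ready. The same wait expressed with client-go (a hedged sketch; the kubeconfig path matches the one loaded earlier in this log and the timeout mirrors the 6m0s wait):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node object until its Ready condition is True,
    // the typed equivalent of the raw GET loop in the log above.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("node %s not Ready within %s", name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19046-3880/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	if err := waitNodeReady(cs, "ha-565925-m02", 6*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }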
	I0610 10:39:58.008158   21811 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 10:39:58.008248   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I0610 10:39:58.008257   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:58.008263   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:58.008266   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:58.015011   21811 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 10:39:58.023036   21811 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace to be "Ready" ...
	I0610 10:39:58.023115   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 10:39:58.023128   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:58.023138   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:58.023145   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:58.025950   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:39:58.027016   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:39:58.027033   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:58.027040   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:58.027044   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:58.029596   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:39:58.030412   21811 pod_ready.go:92] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"True"
	I0610 10:39:58.030428   21811 pod_ready.go:81] duration metric: took 7.36967ms for pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace to be "Ready" ...
	I0610 10:39:58.030436   21811 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wn6nh" in "kube-system" namespace to be "Ready" ...
	I0610 10:39:58.030480   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wn6nh
	I0610 10:39:58.030492   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:58.030499   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:58.030504   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:58.033313   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:39:58.033962   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:39:58.033983   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:58.033990   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:58.033993   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:58.036194   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:39:58.036738   21811 pod_ready.go:92] pod "coredns-7db6d8ff4d-wn6nh" in "kube-system" namespace has status "Ready":"True"
	I0610 10:39:58.036756   21811 pod_ready.go:81] duration metric: took 6.31506ms for pod "coredns-7db6d8ff4d-wn6nh" in "kube-system" namespace to be "Ready" ...
	I0610 10:39:58.036765   21811 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:39:58.036808   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925
	I0610 10:39:58.036815   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:58.036837   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:58.036842   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:58.039110   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:39:58.039765   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:39:58.039784   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:58.039793   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:58.039800   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:58.042406   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:39:58.043194   21811 pod_ready.go:92] pod "etcd-ha-565925" in "kube-system" namespace has status "Ready":"True"
	I0610 10:39:58.043214   21811 pod_ready.go:81] duration metric: took 6.442915ms for pod "etcd-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:39:58.043226   21811 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:39:58.043286   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m02
	I0610 10:39:58.043298   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:58.043308   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:58.043314   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:58.045880   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:39:58.046485   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:58.046503   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:58.046513   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:58.046519   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:58.048890   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:39:58.543724   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m02
	I0610 10:39:58.543751   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:58.543763   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:58.543771   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:58.547201   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:58.547764   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:58.547781   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:58.547788   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:58.547792   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:58.550608   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:39:59.043466   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m02
	I0610 10:39:59.043489   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:59.043497   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:59.043500   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:59.046573   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:59.047129   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:59.047144   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:59.047151   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:59.047156   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:59.049633   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:39:59.050034   21811 pod_ready.go:92] pod "etcd-ha-565925-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 10:39:59.050050   21811 pod_ready.go:81] duration metric: took 1.006817413s for pod "etcd-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:39:59.050063   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:39:59.050106   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925
	I0610 10:39:59.050117   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:59.050125   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:59.050131   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:59.052559   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:39:59.204496   21811 request.go:629] Waited for 151.324356ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:39:59.204548   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:39:59.204553   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:59.204560   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:59.204564   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:59.207767   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:59.208449   21811 pod_ready.go:92] pod "kube-apiserver-ha-565925" in "kube-system" namespace has status "Ready":"True"
	I0610 10:39:59.208478   21811 pod_ready.go:81] duration metric: took 158.407888ms for pod "kube-apiserver-ha-565925" in "kube-system" namespace to be "Ready" ...
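
Note: the "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's own token-bucket rate limiter, not from server-side API Priority and Fairness. As a rough, illustrative sketch (the kubeconfig path and the chosen limits are placeholders; QPS and Burst are real rest.Config fields), the limiter is tuned when the client is built:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFasterClient builds a clientset with a higher client-side rate limit.
// The values below are illustrative; client-go's defaults are low (historically
// 5 QPS with a burst of 10), which is what produces the "Waited for ..."
// throttling messages seen in this log.
func newFasterClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}
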
	I0610 10:39:59.208492   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:39:59.403868   21811 request.go:629] Waited for 195.296949ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925-m02
	I0610 10:39:59.403977   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925-m02
	I0610 10:39:59.403993   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:59.404005   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:59.404014   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:59.407224   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:59.604375   21811 request.go:629] Waited for 196.447688ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:59.604450   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:39:59.604456   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:59.604464   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:59.604469   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:59.607767   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:39:59.804574   21811 request.go:629] Waited for 95.276273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925-m02
	I0610 10:39:59.804625   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925-m02
	I0610 10:39:59.804630   21811 round_trippers.go:469] Request Headers:
	I0610 10:39:59.804637   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:39:59.804641   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:39:59.808860   21811 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:40:00.004655   21811 request.go:629] Waited for 194.884512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:00.004735   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:00.004745   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:00.004753   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:00.004759   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:00.008608   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:00.209433   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925-m02
	I0610 10:40:00.209460   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:00.209473   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:00.209478   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:00.212363   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:40:00.404476   21811 request.go:629] Waited for 191.368366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:00.404538   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:00.404546   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:00.404557   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:00.404572   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:00.408429   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:00.709062   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925-m02
	I0610 10:40:00.709082   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:00.709091   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:00.709094   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:00.712283   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:00.804349   21811 request.go:629] Waited for 91.269028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:00.804401   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:00.804407   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:00.804414   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:00.804422   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:00.807309   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:40:01.209274   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925-m02
	I0610 10:40:01.209295   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:01.209302   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:01.209306   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:01.212931   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:01.213927   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:01.213941   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:01.213947   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:01.213950   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:01.216958   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:40:01.217469   21811 pod_ready.go:102] pod "kube-apiserver-ha-565925-m02" in "kube-system" namespace has status "Ready":"False"
	I0610 10:40:01.709420   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925-m02
	I0610 10:40:01.709442   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:01.709452   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:01.709458   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:01.712358   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:40:01.713157   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:01.713175   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:01.713191   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:01.713201   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:01.716001   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:40:02.208854   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925-m02
	I0610 10:40:02.208883   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:02.208895   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:02.208899   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:02.211845   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:40:02.212611   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:02.212630   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:02.212640   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:02.212645   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:02.215148   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:40:02.215545   21811 pod_ready.go:92] pod "kube-apiserver-ha-565925-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 10:40:02.215561   21811 pod_ready.go:81] duration metric: took 3.007059008s for pod "kube-apiserver-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:40:02.215570   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:40:02.215630   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565925
	I0610 10:40:02.215640   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:02.215647   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:02.215652   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:02.218200   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:40:02.404194   21811 request.go:629] Waited for 185.334966ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:40:02.404258   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:40:02.404266   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:02.404276   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:02.404283   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:02.407282   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:40:02.407833   21811 pod_ready.go:92] pod "kube-controller-manager-ha-565925" in "kube-system" namespace has status "Ready":"True"
	I0610 10:40:02.407851   21811 pod_ready.go:81] duration metric: took 192.275745ms for pod "kube-controller-manager-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:40:02.407862   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:40:02.604340   21811 request.go:629] Waited for 196.400035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565925-m02
	I0610 10:40:02.604408   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565925-m02
	I0610 10:40:02.604415   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:02.604426   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:02.604432   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:02.607940   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:02.803877   21811 request.go:629] Waited for 195.344559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:02.803932   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:02.803936   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:02.803949   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:02.803954   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:02.807838   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:02.808698   21811 pod_ready.go:92] pod "kube-controller-manager-ha-565925-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 10:40:02.808721   21811 pod_ready.go:81] duration metric: took 400.852342ms for pod "kube-controller-manager-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:40:02.808734   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vbgnx" in "kube-system" namespace to be "Ready" ...
	I0610 10:40:03.004936   21811 request.go:629] Waited for 196.135591ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbgnx
	I0610 10:40:03.005038   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbgnx
	I0610 10:40:03.005045   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:03.005051   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:03.005055   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:03.008304   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:03.204351   21811 request.go:629] Waited for 195.385662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:03.204425   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:03.204435   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:03.204450   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:03.204463   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:03.208001   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:03.208531   21811 pod_ready.go:92] pod "kube-proxy-vbgnx" in "kube-system" namespace has status "Ready":"True"
	I0610 10:40:03.208557   21811 pod_ready.go:81] duration metric: took 399.814343ms for pod "kube-proxy-vbgnx" in "kube-system" namespace to be "Ready" ...
	I0610 10:40:03.208580   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wdjhn" in "kube-system" namespace to be "Ready" ...
	I0610 10:40:03.404626   21811 request.go:629] Waited for 195.970662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wdjhn
	I0610 10:40:03.404691   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wdjhn
	I0610 10:40:03.404696   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:03.404703   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:03.404706   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:03.408644   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:03.604761   21811 request.go:629] Waited for 195.395719ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:40:03.604837   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:40:03.604847   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:03.604880   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:03.604892   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:03.607908   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:03.608556   21811 pod_ready.go:92] pod "kube-proxy-wdjhn" in "kube-system" namespace has status "Ready":"True"
	I0610 10:40:03.608574   21811 pod_ready.go:81] duration metric: took 399.981689ms for pod "kube-proxy-wdjhn" in "kube-system" namespace to be "Ready" ...
	I0610 10:40:03.608584   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:40:03.804812   21811 request.go:629] Waited for 196.151277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565925
	I0610 10:40:03.804886   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565925
	I0610 10:40:03.804893   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:03.804903   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:03.804911   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:03.808282   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:04.004256   21811 request.go:629] Waited for 195.367711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:40:04.004336   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:40:04.004344   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:04.004356   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:04.004364   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:04.007931   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:04.008517   21811 pod_ready.go:92] pod "kube-scheduler-ha-565925" in "kube-system" namespace has status "Ready":"True"
	I0610 10:40:04.008536   21811 pod_ready.go:81] duration metric: took 399.94677ms for pod "kube-scheduler-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:40:04.008545   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:40:04.204678   21811 request.go:629] Waited for 196.065911ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565925-m02
	I0610 10:40:04.204750   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565925-m02
	I0610 10:40:04.204756   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:04.204771   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:04.204777   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:04.208588   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:04.404776   21811 request.go:629] Waited for 195.352353ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:04.404851   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:40:04.404861   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:04.404877   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:04.404890   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:04.407808   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:40:04.408407   21811 pod_ready.go:92] pod "kube-scheduler-ha-565925-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 10:40:04.408426   21811 pod_ready.go:81] duration metric: took 399.874222ms for pod "kube-scheduler-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:40:04.408440   21811 pod_ready.go:38] duration metric: took 6.400239578s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
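
Note: the pod_ready loop above repeatedly GETs each control-plane pod and its node until the pod reports the Ready condition. For orientation only, a minimal client-go sketch of the same per-pod readiness check (the helper name and polling interval are invented for illustration; this is not minikube's actual implementation):

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout
// expires, roughly what pod_ready.go is doing in the log above.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient errors as "not ready yet"
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
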
	I0610 10:40:04.408457   21811 api_server.go:52] waiting for apiserver process to appear ...
	I0610 10:40:04.408515   21811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:40:04.426888   21811 api_server.go:72] duration metric: took 14.262012429s to wait for apiserver process to appear ...
	I0610 10:40:04.426915   21811 api_server.go:88] waiting for apiserver healthz status ...
	I0610 10:40:04.426959   21811 api_server.go:253] Checking apiserver healthz at https://192.168.39.208:8443/healthz ...
	I0610 10:40:04.431265   21811 api_server.go:279] https://192.168.39.208:8443/healthz returned 200:
	ok
	I0610 10:40:04.431340   21811 round_trippers.go:463] GET https://192.168.39.208:8443/version
	I0610 10:40:04.431351   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:04.431361   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:04.431369   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:04.432338   21811 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 10:40:04.432479   21811 api_server.go:141] control plane version: v1.30.1
	I0610 10:40:04.432501   21811 api_server.go:131] duration metric: took 5.579091ms to wait for apiserver health ...
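
Note: the health gate above is a plain HTTPS GET against /healthz that expects the literal body "ok", followed by a /version request. A minimal sketch of such a probe (illustrative only; real code would present the client certificate and trust the cluster CA rather than skipping TLS verification):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// checkHealthz mirrors the probe in the log above: GET <endpoint>/healthz and
// require a 200 response whose body is "ok".
func checkHealthz(endpoint string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustrative shortcut; a real client should load the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	if resp.StatusCode != http.StatusOK || strings.TrimSpace(string(body)) != "ok" {
		return fmt.Errorf("healthz: status %d, body %q", resp.StatusCode, body)
	}
	return nil
}
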
	I0610 10:40:04.432511   21811 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 10:40:04.603986   21811 request.go:629] Waited for 171.407019ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I0610 10:40:04.604055   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I0610 10:40:04.604066   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:04.604078   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:04.604113   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:04.610843   21811 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 10:40:04.616905   21811 system_pods.go:59] 17 kube-system pods found
	I0610 10:40:04.616943   21811 system_pods.go:61] "coredns-7db6d8ff4d-545cf" [7564efde-b96c-48b3-b194-bca695f7ae95] Running
	I0610 10:40:04.616961   21811 system_pods.go:61] "coredns-7db6d8ff4d-wn6nh" [9e47f047-e98b-48c8-8a33-8f790a3e8017] Running
	I0610 10:40:04.616968   21811 system_pods.go:61] "etcd-ha-565925" [527cd8fc-9ac8-4432-a265-910957e9268f] Running
	I0610 10:40:04.616973   21811 system_pods.go:61] "etcd-ha-565925-m02" [7068fe45-72fe-4204-8742-d8803e585954] Running
	I0610 10:40:04.616978   21811 system_pods.go:61] "kindnet-9jv7q" [2f97ff84-bae1-4e63-9e9a-08e9e7afe68b] Running
	I0610 10:40:04.616983   21811 system_pods.go:61] "kindnet-rnn59" [9141e131-eebc-4f51-8b55-46ff649ffaee] Running
	I0610 10:40:04.616989   21811 system_pods.go:61] "kube-apiserver-ha-565925" [75b7b060-85f2-45ca-a58e-a42a8c2d7fab] Running
	I0610 10:40:04.616994   21811 system_pods.go:61] "kube-apiserver-ha-565925-m02" [a7e4eed5-4ada-4063-a8e1-f82ed820f684] Running
	I0610 10:40:04.617003   21811 system_pods.go:61] "kube-controller-manager-ha-565925" [cd41ddc9-22af-4789-a9ea-3663a5de415b] Running
	I0610 10:40:04.617009   21811 system_pods.go:61] "kube-controller-manager-ha-565925-m02" [6b2d5860-4e09-4eeb-a9e3-24952ec3fab4] Running
	I0610 10:40:04.617015   21811 system_pods.go:61] "kube-proxy-vbgnx" [f43735ae-adc0-4fe4-992e-b640b52886d7] Running
	I0610 10:40:04.617020   21811 system_pods.go:61] "kube-proxy-wdjhn" [da3ac11b-0906-4695-80b1-f3f4f1a34de1] Running
	I0610 10:40:04.617029   21811 system_pods.go:61] "kube-scheduler-ha-565925" [74663e0a-7f9e-4211-b165-39358cb3b3e2] Running
	I0610 10:40:04.617036   21811 system_pods.go:61] "kube-scheduler-ha-565925-m02" [745d6073-f0af-4aa5-9345-38c9b88dad69] Running
	I0610 10:40:04.617044   21811 system_pods.go:61] "kube-vip-ha-565925" [039ffa3e-aac6-4bdc-a576-0158c7fb283d] Running
	I0610 10:40:04.617049   21811 system_pods.go:61] "kube-vip-ha-565925-m02" [f28be16a-38b2-4746-8b18-ab0014783aad] Running
	I0610 10:40:04.617055   21811 system_pods.go:61] "storage-provisioner" [0ca60a36-c445-4520-b857-7df39dfed848] Running
	I0610 10:40:04.617063   21811 system_pods.go:74] duration metric: took 184.546241ms to wait for pod list to return data ...
	I0610 10:40:04.617098   21811 default_sa.go:34] waiting for default service account to be created ...
	I0610 10:40:04.804530   21811 request.go:629] Waited for 187.351129ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/default/serviceaccounts
	I0610 10:40:04.804582   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/default/serviceaccounts
	I0610 10:40:04.804587   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:04.804594   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:04.804598   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:04.808093   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:04.808307   21811 default_sa.go:45] found service account: "default"
	I0610 10:40:04.808326   21811 default_sa.go:55] duration metric: took 191.214996ms for default service account to be created ...
	I0610 10:40:04.808337   21811 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 10:40:05.004375   21811 request.go:629] Waited for 195.968568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I0610 10:40:05.004450   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I0610 10:40:05.004456   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:05.004471   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:05.004482   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:05.011392   21811 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 10:40:05.019047   21811 system_pods.go:86] 17 kube-system pods found
	I0610 10:40:05.019082   21811 system_pods.go:89] "coredns-7db6d8ff4d-545cf" [7564efde-b96c-48b3-b194-bca695f7ae95] Running
	I0610 10:40:05.019089   21811 system_pods.go:89] "coredns-7db6d8ff4d-wn6nh" [9e47f047-e98b-48c8-8a33-8f790a3e8017] Running
	I0610 10:40:05.019094   21811 system_pods.go:89] "etcd-ha-565925" [527cd8fc-9ac8-4432-a265-910957e9268f] Running
	I0610 10:40:05.019099   21811 system_pods.go:89] "etcd-ha-565925-m02" [7068fe45-72fe-4204-8742-d8803e585954] Running
	I0610 10:40:05.019103   21811 system_pods.go:89] "kindnet-9jv7q" [2f97ff84-bae1-4e63-9e9a-08e9e7afe68b] Running
	I0610 10:40:05.019107   21811 system_pods.go:89] "kindnet-rnn59" [9141e131-eebc-4f51-8b55-46ff649ffaee] Running
	I0610 10:40:05.019112   21811 system_pods.go:89] "kube-apiserver-ha-565925" [75b7b060-85f2-45ca-a58e-a42a8c2d7fab] Running
	I0610 10:40:05.019116   21811 system_pods.go:89] "kube-apiserver-ha-565925-m02" [a7e4eed5-4ada-4063-a8e1-f82ed820f684] Running
	I0610 10:40:05.019122   21811 system_pods.go:89] "kube-controller-manager-ha-565925" [cd41ddc9-22af-4789-a9ea-3663a5de415b] Running
	I0610 10:40:05.019127   21811 system_pods.go:89] "kube-controller-manager-ha-565925-m02" [6b2d5860-4e09-4eeb-a9e3-24952ec3fab4] Running
	I0610 10:40:05.019135   21811 system_pods.go:89] "kube-proxy-vbgnx" [f43735ae-adc0-4fe4-992e-b640b52886d7] Running
	I0610 10:40:05.019139   21811 system_pods.go:89] "kube-proxy-wdjhn" [da3ac11b-0906-4695-80b1-f3f4f1a34de1] Running
	I0610 10:40:05.019147   21811 system_pods.go:89] "kube-scheduler-ha-565925" [74663e0a-7f9e-4211-b165-39358cb3b3e2] Running
	I0610 10:40:05.019151   21811 system_pods.go:89] "kube-scheduler-ha-565925-m02" [745d6073-f0af-4aa5-9345-38c9b88dad69] Running
	I0610 10:40:05.019157   21811 system_pods.go:89] "kube-vip-ha-565925" [039ffa3e-aac6-4bdc-a576-0158c7fb283d] Running
	I0610 10:40:05.019162   21811 system_pods.go:89] "kube-vip-ha-565925-m02" [f28be16a-38b2-4746-8b18-ab0014783aad] Running
	I0610 10:40:05.019169   21811 system_pods.go:89] "storage-provisioner" [0ca60a36-c445-4520-b857-7df39dfed848] Running
	I0610 10:40:05.019175   21811 system_pods.go:126] duration metric: took 210.833341ms to wait for k8s-apps to be running ...
	I0610 10:40:05.019185   21811 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 10:40:05.019242   21811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:40:05.036446   21811 system_svc.go:56] duration metric: took 17.251408ms WaitForService to wait for kubelet
	I0610 10:40:05.036475   21811 kubeadm.go:576] duration metric: took 14.871603454s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:40:05.036494   21811 node_conditions.go:102] verifying NodePressure condition ...
	I0610 10:40:05.204902   21811 request.go:629] Waited for 168.331352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes
	I0610 10:40:05.205006   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes
	I0610 10:40:05.205018   21811 round_trippers.go:469] Request Headers:
	I0610 10:40:05.205030   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:40:05.205036   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:40:05.208916   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:40:05.209978   21811 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 10:40:05.209999   21811 node_conditions.go:123] node cpu capacity is 2
	I0610 10:40:05.210011   21811 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 10:40:05.210015   21811 node_conditions.go:123] node cpu capacity is 2
	I0610 10:40:05.210020   21811 node_conditions.go:105] duration metric: took 173.520926ms to run NodePressure ...
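
Note: the NodePressure step reads each node's reported capacity (ephemeral storage and CPU above). A short sketch of pulling the same figures with client-go (assumes an existing clientset; the function name is invented for illustration):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists all nodes and prints the same capacity fields the
// NodePressure check reports in the log above.
func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
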
	I0610 10:40:05.210031   21811 start.go:240] waiting for startup goroutines ...
	I0610 10:40:05.210055   21811 start.go:254] writing updated cluster config ...
	I0610 10:40:05.212059   21811 out.go:177] 
	I0610 10:40:05.213524   21811 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:40:05.213649   21811 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json ...
	I0610 10:40:05.215403   21811 out.go:177] * Starting "ha-565925-m03" control-plane node in "ha-565925" cluster
	I0610 10:40:05.216640   21811 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 10:40:05.216669   21811 cache.go:56] Caching tarball of preloaded images
	I0610 10:40:05.216787   21811 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 10:40:05.216803   21811 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0610 10:40:05.216923   21811 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json ...
	I0610 10:40:05.217116   21811 start.go:360] acquireMachinesLock for ha-565925-m03: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:40:05.217156   21811 start.go:364] duration metric: took 21.755µs to acquireMachinesLock for "ha-565925-m03"
	I0610 10:40:05.217172   21811 start.go:93] Provisioning new machine with config: &{Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 10:40:05.217266   21811 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0610 10:40:05.218898   21811 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 10:40:05.218992   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:40:05.219026   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:40:05.233379   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38855
	I0610 10:40:05.233799   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:40:05.234277   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:40:05.234301   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:40:05.234703   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:40:05.234895   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetMachineName
	I0610 10:40:05.235088   21811 main.go:141] libmachine: (ha-565925-m03) Calling .DriverName
	I0610 10:40:05.235242   21811 start.go:159] libmachine.API.Create for "ha-565925" (driver="kvm2")
	I0610 10:40:05.235271   21811 client.go:168] LocalClient.Create starting
	I0610 10:40:05.235310   21811 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem
	I0610 10:40:05.235350   21811 main.go:141] libmachine: Decoding PEM data...
	I0610 10:40:05.235370   21811 main.go:141] libmachine: Parsing certificate...
	I0610 10:40:05.235432   21811 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem
	I0610 10:40:05.235459   21811 main.go:141] libmachine: Decoding PEM data...
	I0610 10:40:05.235475   21811 main.go:141] libmachine: Parsing certificate...
	I0610 10:40:05.235502   21811 main.go:141] libmachine: Running pre-create checks...
	I0610 10:40:05.235513   21811 main.go:141] libmachine: (ha-565925-m03) Calling .PreCreateCheck
	I0610 10:40:05.235682   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetConfigRaw
	I0610 10:40:05.236048   21811 main.go:141] libmachine: Creating machine...
	I0610 10:40:05.236059   21811 main.go:141] libmachine: (ha-565925-m03) Calling .Create
	I0610 10:40:05.236219   21811 main.go:141] libmachine: (ha-565925-m03) Creating KVM machine...
	I0610 10:40:05.237677   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found existing default KVM network
	I0610 10:40:05.237786   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found existing private KVM network mk-ha-565925
	I0610 10:40:05.237946   21811 main.go:141] libmachine: (ha-565925-m03) Setting up store path in /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03 ...
	I0610 10:40:05.237977   21811 main.go:141] libmachine: (ha-565925-m03) Building disk image from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0610 10:40:05.238006   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:05.237910   22654 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:40:05.238081   21811 main.go:141] libmachine: (ha-565925-m03) Downloading /home/jenkins/minikube-integration/19046-3880/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 10:40:05.460882   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:05.460758   22654 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa...
	I0610 10:40:05.512643   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:05.512536   22654 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/ha-565925-m03.rawdisk...
	I0610 10:40:05.512673   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Writing magic tar header
	I0610 10:40:05.512683   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Writing SSH key tar header
	I0610 10:40:05.512692   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:05.512643   22654 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03 ...
	I0610 10:40:05.512823   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03
	I0610 10:40:05.512846   21811 main.go:141] libmachine: (ha-565925-m03) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03 (perms=drwx------)
	I0610 10:40:05.512858   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines
	I0610 10:40:05.512871   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:40:05.512885   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880
	I0610 10:40:05.512899   21811 main.go:141] libmachine: (ha-565925-m03) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines (perms=drwxr-xr-x)
	I0610 10:40:05.512910   21811 main.go:141] libmachine: (ha-565925-m03) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube (perms=drwxr-xr-x)
	I0610 10:40:05.512917   21811 main.go:141] libmachine: (ha-565925-m03) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880 (perms=drwxrwxr-x)
	I0610 10:40:05.512927   21811 main.go:141] libmachine: (ha-565925-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0610 10:40:05.512933   21811 main.go:141] libmachine: (ha-565925-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0610 10:40:05.512940   21811 main.go:141] libmachine: (ha-565925-m03) Creating domain...
	I0610 10:40:05.513015   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0610 10:40:05.513047   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Checking permissions on dir: /home/jenkins
	I0610 10:40:05.513063   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Checking permissions on dir: /home
	I0610 10:40:05.513076   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Skipping /home - not owner
	I0610 10:40:05.513953   21811 main.go:141] libmachine: (ha-565925-m03) define libvirt domain using xml: 
	I0610 10:40:05.513972   21811 main.go:141] libmachine: (ha-565925-m03) <domain type='kvm'>
	I0610 10:40:05.513982   21811 main.go:141] libmachine: (ha-565925-m03)   <name>ha-565925-m03</name>
	I0610 10:40:05.513990   21811 main.go:141] libmachine: (ha-565925-m03)   <memory unit='MiB'>2200</memory>
	I0610 10:40:05.514000   21811 main.go:141] libmachine: (ha-565925-m03)   <vcpu>2</vcpu>
	I0610 10:40:05.514010   21811 main.go:141] libmachine: (ha-565925-m03)   <features>
	I0610 10:40:05.514021   21811 main.go:141] libmachine: (ha-565925-m03)     <acpi/>
	I0610 10:40:05.514031   21811 main.go:141] libmachine: (ha-565925-m03)     <apic/>
	I0610 10:40:05.514042   21811 main.go:141] libmachine: (ha-565925-m03)     <pae/>
	I0610 10:40:05.514057   21811 main.go:141] libmachine: (ha-565925-m03)     
	I0610 10:40:05.514070   21811 main.go:141] libmachine: (ha-565925-m03)   </features>
	I0610 10:40:05.514087   21811 main.go:141] libmachine: (ha-565925-m03)   <cpu mode='host-passthrough'>
	I0610 10:40:05.514110   21811 main.go:141] libmachine: (ha-565925-m03)   
	I0610 10:40:05.514116   21811 main.go:141] libmachine: (ha-565925-m03)   </cpu>
	I0610 10:40:05.514130   21811 main.go:141] libmachine: (ha-565925-m03)   <os>
	I0610 10:40:05.514138   21811 main.go:141] libmachine: (ha-565925-m03)     <type>hvm</type>
	I0610 10:40:05.514147   21811 main.go:141] libmachine: (ha-565925-m03)     <boot dev='cdrom'/>
	I0610 10:40:05.514155   21811 main.go:141] libmachine: (ha-565925-m03)     <boot dev='hd'/>
	I0610 10:40:05.514164   21811 main.go:141] libmachine: (ha-565925-m03)     <bootmenu enable='no'/>
	I0610 10:40:05.514178   21811 main.go:141] libmachine: (ha-565925-m03)   </os>
	I0610 10:40:05.514214   21811 main.go:141] libmachine: (ha-565925-m03)   <devices>
	I0610 10:40:05.514240   21811 main.go:141] libmachine: (ha-565925-m03)     <disk type='file' device='cdrom'>
	I0610 10:40:05.514260   21811 main.go:141] libmachine: (ha-565925-m03)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/boot2docker.iso'/>
	I0610 10:40:05.514272   21811 main.go:141] libmachine: (ha-565925-m03)       <target dev='hdc' bus='scsi'/>
	I0610 10:40:05.514286   21811 main.go:141] libmachine: (ha-565925-m03)       <readonly/>
	I0610 10:40:05.514297   21811 main.go:141] libmachine: (ha-565925-m03)     </disk>
	I0610 10:40:05.514309   21811 main.go:141] libmachine: (ha-565925-m03)     <disk type='file' device='disk'>
	I0610 10:40:05.514333   21811 main.go:141] libmachine: (ha-565925-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0610 10:40:05.514354   21811 main.go:141] libmachine: (ha-565925-m03)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/ha-565925-m03.rawdisk'/>
	I0610 10:40:05.514367   21811 main.go:141] libmachine: (ha-565925-m03)       <target dev='hda' bus='virtio'/>
	I0610 10:40:05.514379   21811 main.go:141] libmachine: (ha-565925-m03)     </disk>
	I0610 10:40:05.514391   21811 main.go:141] libmachine: (ha-565925-m03)     <interface type='network'>
	I0610 10:40:05.514421   21811 main.go:141] libmachine: (ha-565925-m03)       <source network='mk-ha-565925'/>
	I0610 10:40:05.514443   21811 main.go:141] libmachine: (ha-565925-m03)       <model type='virtio'/>
	I0610 10:40:05.514456   21811 main.go:141] libmachine: (ha-565925-m03)     </interface>
	I0610 10:40:05.514468   21811 main.go:141] libmachine: (ha-565925-m03)     <interface type='network'>
	I0610 10:40:05.514480   21811 main.go:141] libmachine: (ha-565925-m03)       <source network='default'/>
	I0610 10:40:05.514491   21811 main.go:141] libmachine: (ha-565925-m03)       <model type='virtio'/>
	I0610 10:40:05.514505   21811 main.go:141] libmachine: (ha-565925-m03)     </interface>
	I0610 10:40:05.514515   21811 main.go:141] libmachine: (ha-565925-m03)     <serial type='pty'>
	I0610 10:40:05.514526   21811 main.go:141] libmachine: (ha-565925-m03)       <target port='0'/>
	I0610 10:40:05.514545   21811 main.go:141] libmachine: (ha-565925-m03)     </serial>
	I0610 10:40:05.514562   21811 main.go:141] libmachine: (ha-565925-m03)     <console type='pty'>
	I0610 10:40:05.514573   21811 main.go:141] libmachine: (ha-565925-m03)       <target type='serial' port='0'/>
	I0610 10:40:05.514585   21811 main.go:141] libmachine: (ha-565925-m03)     </console>
	I0610 10:40:05.514599   21811 main.go:141] libmachine: (ha-565925-m03)     <rng model='virtio'>
	I0610 10:40:05.514611   21811 main.go:141] libmachine: (ha-565925-m03)       <backend model='random'>/dev/random</backend>
	I0610 10:40:05.514623   21811 main.go:141] libmachine: (ha-565925-m03)     </rng>
	I0610 10:40:05.514638   21811 main.go:141] libmachine: (ha-565925-m03)     
	I0610 10:40:05.514649   21811 main.go:141] libmachine: (ha-565925-m03)     
	I0610 10:40:05.514657   21811 main.go:141] libmachine: (ha-565925-m03)   </devices>
	I0610 10:40:05.514669   21811 main.go:141] libmachine: (ha-565925-m03) </domain>
	I0610 10:40:05.514680   21811 main.go:141] libmachine: (ha-565925-m03) 
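
Note: the kvm2 driver defines and boots the new node from the domain XML printed above via libvirt. Conceptually (not the driver's actual code path, which talks to libvirtd through libvirt's Go bindings), the same step could be reproduced by shelling out to virsh; the sketch below assumes the XML has been written to a file:

package main

import (
	"fmt"
	"os/exec"
)

// defineAndStartDomain is a rough stand-in for what the kvm2 driver does with
// the libvirt API: register the domain XML, then boot the domain. Shelling out
// to virsh is only for illustration.
func defineAndStartDomain(xmlPath, domainName string) error {
	if out, err := exec.Command("virsh", "--connect", "qemu:///system", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define: %v: %s", err, out)
	}
	if out, err := exec.Command("virsh", "--connect", "qemu:///system", "start", domainName).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start: %v: %s", err, out)
	}
	return nil
}
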
	I0610 10:40:05.521327   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:2e:39:d5 in network default
	I0610 10:40:05.521938   21811 main.go:141] libmachine: (ha-565925-m03) Ensuring networks are active...
	I0610 10:40:05.521960   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:05.522743   21811 main.go:141] libmachine: (ha-565925-m03) Ensuring network default is active
	I0610 10:40:05.523100   21811 main.go:141] libmachine: (ha-565925-m03) Ensuring network mk-ha-565925 is active
	I0610 10:40:05.523540   21811 main.go:141] libmachine: (ha-565925-m03) Getting domain xml...
	I0610 10:40:05.524230   21811 main.go:141] libmachine: (ha-565925-m03) Creating domain...
	I0610 10:40:06.740424   21811 main.go:141] libmachine: (ha-565925-m03) Waiting to get IP...
	I0610 10:40:06.741319   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:06.741844   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:06.741868   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:06.741821   22654 retry.go:31] will retry after 311.64489ms: waiting for machine to come up
	I0610 10:40:07.055182   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:07.055696   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:07.055721   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:07.055648   22654 retry.go:31] will retry after 333.608993ms: waiting for machine to come up
	I0610 10:40:07.391058   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:07.391414   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:07.391439   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:07.391363   22654 retry.go:31] will retry after 429.022376ms: waiting for machine to come up
	I0610 10:40:07.822069   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:07.822478   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:07.822506   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:07.822431   22654 retry.go:31] will retry after 592.938721ms: waiting for machine to come up
	I0610 10:40:08.417392   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:08.417873   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:08.417902   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:08.417827   22654 retry.go:31] will retry after 629.38733ms: waiting for machine to come up
	I0610 10:40:09.049096   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:09.049554   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:09.049582   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:09.049513   22654 retry.go:31] will retry after 832.669925ms: waiting for machine to come up
	I0610 10:40:09.883539   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:09.884032   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:09.884063   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:09.883974   22654 retry.go:31] will retry after 829.939129ms: waiting for machine to come up
	I0610 10:40:10.715792   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:10.716263   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:10.716287   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:10.716226   22654 retry.go:31] will retry after 1.361129244s: waiting for machine to come up
	I0610 10:40:12.079856   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:12.080406   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:12.080433   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:12.080347   22654 retry.go:31] will retry after 1.717364358s: waiting for machine to come up
	I0610 10:40:13.800411   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:13.800943   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:13.800997   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:13.800898   22654 retry.go:31] will retry after 1.606518953s: waiting for machine to come up
	I0610 10:40:15.409197   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:15.409597   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:15.409621   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:15.409569   22654 retry.go:31] will retry after 1.751158033s: waiting for machine to come up
	I0610 10:40:17.162011   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:17.162609   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:17.162634   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:17.162572   22654 retry.go:31] will retry after 2.822466845s: waiting for machine to come up
	I0610 10:40:19.986284   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:19.986865   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:19.986907   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:19.986753   22654 retry.go:31] will retry after 3.077885171s: waiting for machine to come up
	I0610 10:40:23.066029   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:23.066407   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find current IP address of domain ha-565925-m03 in network mk-ha-565925
	I0610 10:40:23.066440   21811 main.go:141] libmachine: (ha-565925-m03) DBG | I0610 10:40:23.066379   22654 retry.go:31] will retry after 4.747341484s: waiting for machine to come up
	I0610 10:40:27.814983   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:27.815592   21811 main.go:141] libmachine: (ha-565925-m03) Found IP for machine: 192.168.39.76
	I0610 10:40:27.815635   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has current primary IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:27.815644   21811 main.go:141] libmachine: (ha-565925-m03) Reserving static IP address...
	I0610 10:40:27.816011   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find host DHCP lease matching {name: "ha-565925-m03", mac: "52:54:00:cf:67:38", ip: "192.168.39.76"} in network mk-ha-565925
	I0610 10:40:27.891235   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Getting to WaitForSSH function...
	I0610 10:40:27.891266   21811 main.go:141] libmachine: (ha-565925-m03) Reserved static IP address: 192.168.39.76
	I0610 10:40:27.891284   21811 main.go:141] libmachine: (ha-565925-m03) Waiting for SSH to be available...
	I0610 10:40:27.893996   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:27.894530   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925
	I0610 10:40:27.894556   21811 main.go:141] libmachine: (ha-565925-m03) DBG | unable to find defined IP address of network mk-ha-565925 interface with MAC address 52:54:00:cf:67:38
	I0610 10:40:27.894789   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Using SSH client type: external
	I0610 10:40:27.894816   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa (-rw-------)
	I0610 10:40:27.894846   21811 main.go:141] libmachine: (ha-565925-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0610 10:40:27.894863   21811 main.go:141] libmachine: (ha-565925-m03) DBG | About to run SSH command:
	I0610 10:40:27.894879   21811 main.go:141] libmachine: (ha-565925-m03) DBG | exit 0
	I0610 10:40:27.898815   21811 main.go:141] libmachine: (ha-565925-m03) DBG | SSH cmd err, output: exit status 255: 
	I0610 10:40:27.898831   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0610 10:40:27.898839   21811 main.go:141] libmachine: (ha-565925-m03) DBG | command : exit 0
	I0610 10:40:27.898850   21811 main.go:141] libmachine: (ha-565925-m03) DBG | err     : exit status 255
	I0610 10:40:27.898864   21811 main.go:141] libmachine: (ha-565925-m03) DBG | output  : 
	I0610 10:40:30.899345   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Getting to WaitForSSH function...
	I0610 10:40:30.902473   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:30.902956   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:30.902978   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:30.903221   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Using SSH client type: external
	I0610 10:40:30.903238   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa (-rw-------)
	I0610 10:40:30.903269   21811 main.go:141] libmachine: (ha-565925-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.76 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0610 10:40:30.903288   21811 main.go:141] libmachine: (ha-565925-m03) DBG | About to run SSH command:
	I0610 10:40:30.903304   21811 main.go:141] libmachine: (ha-565925-m03) DBG | exit 0
	I0610 10:40:31.026097   21811 main.go:141] libmachine: (ha-565925-m03) DBG | SSH cmd err, output: <nil>: 
	I0610 10:40:31.026393   21811 main.go:141] libmachine: (ha-565925-m03) KVM machine creation complete!
	I0610 10:40:31.026699   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetConfigRaw
	I0610 10:40:31.027355   21811 main.go:141] libmachine: (ha-565925-m03) Calling .DriverName
	I0610 10:40:31.027545   21811 main.go:141] libmachine: (ha-565925-m03) Calling .DriverName
	I0610 10:40:31.027714   21811 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0610 10:40:31.027730   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetState
	I0610 10:40:31.028934   21811 main.go:141] libmachine: Detecting operating system of created instance...
	I0610 10:40:31.028980   21811 main.go:141] libmachine: Waiting for SSH to be available...
	I0610 10:40:31.029000   21811 main.go:141] libmachine: Getting to WaitForSSH function...
	I0610 10:40:31.029009   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:40:31.031448   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.031891   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:31.031918   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.032059   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:40:31.032242   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:31.032405   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:31.032554   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:40:31.032723   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:40:31.032930   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0610 10:40:31.032975   21811 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0610 10:40:31.132144   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 10:40:31.132179   21811 main.go:141] libmachine: Detecting the provisioner...
	I0610 10:40:31.132187   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:40:31.134873   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.135271   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:31.135296   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.135471   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:40:31.135664   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:31.135805   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:31.136004   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:40:31.136185   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:40:31.136375   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0610 10:40:31.136387   21811 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0610 10:40:31.233651   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0610 10:40:31.233714   21811 main.go:141] libmachine: found compatible host: buildroot
	I0610 10:40:31.233723   21811 main.go:141] libmachine: Provisioning with buildroot...
	I0610 10:40:31.233729   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetMachineName
	I0610 10:40:31.234006   21811 buildroot.go:166] provisioning hostname "ha-565925-m03"
	I0610 10:40:31.234030   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetMachineName
	I0610 10:40:31.234209   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:40:31.236834   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.237210   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:31.237247   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.237407   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:40:31.237594   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:31.237872   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:31.238052   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:40:31.238228   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:40:31.238430   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0610 10:40:31.238446   21811 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565925-m03 && echo "ha-565925-m03" | sudo tee /etc/hostname
	I0610 10:40:31.350884   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565925-m03
	
	I0610 10:40:31.350907   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:40:31.353726   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.354160   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:31.354182   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.354412   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:40:31.354603   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:31.354783   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:31.354949   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:40:31.355123   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:40:31.355327   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0610 10:40:31.355350   21811 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565925-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565925-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565925-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 10:40:31.465909   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 10:40:31.465939   21811 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19046-3880/.minikube CaCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19046-3880/.minikube}
	I0610 10:40:31.465953   21811 buildroot.go:174] setting up certificates
	I0610 10:40:31.465961   21811 provision.go:84] configureAuth start
	I0610 10:40:31.465968   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetMachineName
	I0610 10:40:31.466250   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetIP
	I0610 10:40:31.468714   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.469095   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:31.469120   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.469309   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:40:31.471382   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.471712   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:31.471743   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.471880   21811 provision.go:143] copyHostCerts
	I0610 10:40:31.471909   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 10:40:31.471949   21811 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem, removing ...
	I0610 10:40:31.471961   21811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 10:40:31.472043   21811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem (1082 bytes)
	I0610 10:40:31.472135   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 10:40:31.472160   21811 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem, removing ...
	I0610 10:40:31.472179   21811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 10:40:31.472224   21811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem (1123 bytes)
	I0610 10:40:31.472286   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 10:40:31.472308   21811 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem, removing ...
	I0610 10:40:31.472315   21811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 10:40:31.472354   21811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem (1675 bytes)
	I0610 10:40:31.472424   21811 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem org=jenkins.ha-565925-m03 san=[127.0.0.1 192.168.39.76 ha-565925-m03 localhost minikube]
	I0610 10:40:31.735807   21811 provision.go:177] copyRemoteCerts
	I0610 10:40:31.735855   21811 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 10:40:31.735876   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:40:31.738723   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.739067   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:31.739095   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.739258   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:40:31.739451   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:31.739638   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:40:31.739770   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa Username:docker}
	I0610 10:40:31.822436   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 10:40:31.822499   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 10:40:31.846296   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 10:40:31.846353   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0610 10:40:31.869575   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 10:40:31.869667   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 10:40:31.892496   21811 provision.go:87] duration metric: took 426.521202ms to configureAuth
	I0610 10:40:31.892530   21811 buildroot.go:189] setting minikube options for container-runtime
	I0610 10:40:31.892761   21811 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:40:31.892826   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:40:31.895916   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.896439   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:31.896465   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:31.896683   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:40:31.896872   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:31.897023   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:31.897159   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:40:31.897295   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:40:31.897443   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0610 10:40:31.897457   21811 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 10:40:32.146262   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 10:40:32.146294   21811 main.go:141] libmachine: Checking connection to Docker...
	I0610 10:40:32.146304   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetURL
	I0610 10:40:32.147674   21811 main.go:141] libmachine: (ha-565925-m03) DBG | Using libvirt version 6000000
	I0610 10:40:32.150109   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.150508   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:32.150538   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.150671   21811 main.go:141] libmachine: Docker is up and running!
	I0610 10:40:32.150689   21811 main.go:141] libmachine: Reticulating splines...
	I0610 10:40:32.150697   21811 client.go:171] duration metric: took 26.915416102s to LocalClient.Create
	I0610 10:40:32.150723   21811 start.go:167] duration metric: took 26.915480978s to libmachine.API.Create "ha-565925"
	I0610 10:40:32.150735   21811 start.go:293] postStartSetup for "ha-565925-m03" (driver="kvm2")
	I0610 10:40:32.150746   21811 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 10:40:32.150773   21811 main.go:141] libmachine: (ha-565925-m03) Calling .DriverName
	I0610 10:40:32.151027   21811 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 10:40:32.151058   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:40:32.153169   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.153458   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:32.153478   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.153603   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:40:32.153773   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:32.153971   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:40:32.154128   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa Username:docker}
	I0610 10:40:32.230935   21811 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 10:40:32.234722   21811 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 10:40:32.234745   21811 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/addons for local assets ...
	I0610 10:40:32.234812   21811 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/files for local assets ...
	I0610 10:40:32.234894   21811 filesync.go:149] local asset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> 107582.pem in /etc/ssl/certs
	I0610 10:40:32.234906   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /etc/ssl/certs/107582.pem
	I0610 10:40:32.235015   21811 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 10:40:32.244311   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /etc/ssl/certs/107582.pem (1708 bytes)
	I0610 10:40:32.269943   21811 start.go:296] duration metric: took 119.190727ms for postStartSetup
	I0610 10:40:32.269984   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetConfigRaw
	I0610 10:40:32.270553   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetIP
	I0610 10:40:32.273049   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.273478   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:32.273503   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.273761   21811 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json ...
	I0610 10:40:32.273948   21811 start.go:128] duration metric: took 27.056671199s to createHost
	I0610 10:40:32.273970   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:40:32.275856   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.276263   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:32.276285   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.276443   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:40:32.276614   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:32.276782   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:32.276971   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:40:32.277203   21811 main.go:141] libmachine: Using SSH client type: native
	I0610 10:40:32.277356   21811 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.76 22 <nil> <nil>}
	I0610 10:40:32.277369   21811 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 10:40:32.373481   21811 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718016032.353629638
	
	I0610 10:40:32.373505   21811 fix.go:216] guest clock: 1718016032.353629638
	I0610 10:40:32.373513   21811 fix.go:229] Guest: 2024-06-10 10:40:32.353629638 +0000 UTC Remote: 2024-06-10 10:40:32.273959511 +0000 UTC m=+161.060046086 (delta=79.670127ms)
	I0610 10:40:32.373530   21811 fix.go:200] guest clock delta is within tolerance: 79.670127ms
	I0610 10:40:32.373537   21811 start.go:83] releasing machines lock for "ha-565925-m03", held for 27.156372466s
	I0610 10:40:32.373560   21811 main.go:141] libmachine: (ha-565925-m03) Calling .DriverName
	I0610 10:40:32.373858   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetIP
	I0610 10:40:32.376677   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.377089   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:32.377120   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.379462   21811 out.go:177] * Found network options:
	I0610 10:40:32.380859   21811 out.go:177]   - NO_PROXY=192.168.39.208,192.168.39.230
	W0610 10:40:32.382020   21811 proxy.go:119] fail to check proxy env: Error ip not in block
	W0610 10:40:32.382052   21811 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 10:40:32.382065   21811 main.go:141] libmachine: (ha-565925-m03) Calling .DriverName
	I0610 10:40:32.382567   21811 main.go:141] libmachine: (ha-565925-m03) Calling .DriverName
	I0610 10:40:32.382781   21811 main.go:141] libmachine: (ha-565925-m03) Calling .DriverName
	I0610 10:40:32.382883   21811 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 10:40:32.382921   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	W0610 10:40:32.382997   21811 proxy.go:119] fail to check proxy env: Error ip not in block
	W0610 10:40:32.383026   21811 proxy.go:119] fail to check proxy env: Error ip not in block
	I0610 10:40:32.383079   21811 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 10:40:32.383102   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:40:32.385756   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.386850   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.386886   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:32.387337   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:32.387373   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.387398   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:32.387555   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:40:32.387648   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:40:32.387726   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:32.387797   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:40:32.387858   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:40:32.387957   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:40:32.388038   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa Username:docker}
	I0610 10:40:32.388114   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa Username:docker}
	I0610 10:40:32.620387   21811 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 10:40:32.626506   21811 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 10:40:32.626584   21811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 10:40:32.644521   21811 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 10:40:32.644548   21811 start.go:494] detecting cgroup driver to use...
	I0610 10:40:32.644618   21811 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 10:40:32.660410   21811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 10:40:32.673631   21811 docker.go:217] disabling cri-docker service (if available) ...
	I0610 10:40:32.673681   21811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 10:40:32.687825   21811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 10:40:32.702644   21811 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 10:40:32.822310   21811 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 10:40:32.961150   21811 docker.go:233] disabling docker service ...
	I0610 10:40:32.961243   21811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 10:40:32.975285   21811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 10:40:32.987979   21811 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 10:40:33.128167   21811 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 10:40:33.255549   21811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 10:40:33.268974   21811 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 10:40:33.286308   21811 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0610 10:40:33.286375   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:40:33.297044   21811 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 10:40:33.297119   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:40:33.307368   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:40:33.318217   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:40:33.328550   21811 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 10:40:33.339085   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:40:33.349165   21811 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:40:33.365797   21811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:40:33.375766   21811 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 10:40:33.384682   21811 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0610 10:40:33.384739   21811 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0610 10:40:33.398360   21811 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 10:40:33.407882   21811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:40:33.525781   21811 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0610 10:40:33.675216   21811 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 10:40:33.675278   21811 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 10:40:33.680297   21811 start.go:562] Will wait 60s for crictl version
	I0610 10:40:33.680354   21811 ssh_runner.go:195] Run: which crictl
	I0610 10:40:33.684191   21811 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 10:40:33.724690   21811 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0610 10:40:33.724754   21811 ssh_runner.go:195] Run: crio --version
	I0610 10:40:33.758087   21811 ssh_runner.go:195] Run: crio --version
	I0610 10:40:33.791645   21811 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0610 10:40:33.793142   21811 out.go:177]   - env NO_PROXY=192.168.39.208
	I0610 10:40:33.794452   21811 out.go:177]   - env NO_PROXY=192.168.39.208,192.168.39.230
	I0610 10:40:33.795713   21811 main.go:141] libmachine: (ha-565925-m03) Calling .GetIP
	I0610 10:40:33.798904   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:33.799413   21811 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:40:33.799444   21811 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:40:33.799634   21811 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0610 10:40:33.803804   21811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 10:40:33.815317   21811 mustload.go:65] Loading cluster: ha-565925
	I0610 10:40:33.815593   21811 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:40:33.815844   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:40:33.815883   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:40:33.830974   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44969
	I0610 10:40:33.831407   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:40:33.831916   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:40:33.831936   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:40:33.832243   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:40:33.832446   21811 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:40:33.834077   21811 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:40:33.834356   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:40:33.834394   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:40:33.849334   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37657
	I0610 10:40:33.849815   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:40:33.850272   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:40:33.850296   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:40:33.850612   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:40:33.850814   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:40:33.850997   21811 certs.go:68] Setting up /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925 for IP: 192.168.39.76
	I0610 10:40:33.851011   21811 certs.go:194] generating shared ca certs ...
	I0610 10:40:33.851029   21811 certs.go:226] acquiring lock for ca certs: {Name:mke8b68fecbd8b649419d142cdde25446085a9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:40:33.851175   21811 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key
	I0610 10:40:33.851237   21811 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key
	I0610 10:40:33.851250   21811 certs.go:256] generating profile certs ...
	I0610 10:40:33.851325   21811 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.key
	I0610 10:40:33.851351   21811 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.512d8c09
	I0610 10:40:33.851364   21811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.512d8c09 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.230 192.168.39.76 192.168.39.254]
	I0610 10:40:33.925414   21811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.512d8c09 ...
	I0610 10:40:33.925443   21811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.512d8c09: {Name:mkae780a0d2dbc4ec4fdafac1ace76b0fd2d0fb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:40:33.925607   21811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.512d8c09 ...
	I0610 10:40:33.925619   21811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.512d8c09: {Name:mk6129f5d875915e5790355da934688584ed0ae2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:40:33.925689   21811 certs.go:381] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.512d8c09 -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt
	I0610 10:40:33.925812   21811 certs.go:385] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.512d8c09 -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key
	I0610 10:40:33.925940   21811 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key
	I0610 10:40:33.925959   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 10:40:33.925979   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 10:40:33.925995   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 10:40:33.926014   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 10:40:33.926032   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 10:40:33.926050   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 10:40:33.926068   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 10:40:33.926086   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 10:40:33.926144   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem (1338 bytes)
	W0610 10:40:33.926175   21811 certs.go:480] ignoring /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758_empty.pem, impossibly tiny 0 bytes
	I0610 10:40:33.926186   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 10:40:33.926205   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem (1082 bytes)
	I0610 10:40:33.926227   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem (1123 bytes)
	I0610 10:40:33.926249   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem (1675 bytes)
	I0610 10:40:33.926287   21811 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem (1708 bytes)
	I0610 10:40:33.926313   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /usr/share/ca-certificates/107582.pem
	I0610 10:40:33.926326   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:40:33.926338   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem -> /usr/share/ca-certificates/10758.pem
	I0610 10:40:33.926367   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:40:33.929419   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:40:33.929918   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:40:33.929942   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:40:33.930107   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:40:33.930324   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:40:33.930475   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:40:33.930637   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:40:34.005310   21811 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0610 10:40:34.011309   21811 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0610 10:40:34.022850   21811 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0610 10:40:34.026923   21811 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0610 10:40:34.037843   21811 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0610 10:40:34.041779   21811 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0610 10:40:34.052470   21811 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0610 10:40:34.056818   21811 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0610 10:40:34.067304   21811 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0610 10:40:34.072036   21811 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0610 10:40:34.082439   21811 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0610 10:40:34.087027   21811 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0610 10:40:34.099447   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 10:40:34.123075   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 10:40:34.147023   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 10:40:34.170034   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 10:40:34.192193   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0610 10:40:34.213773   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 10:40:34.234759   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 10:40:34.257207   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 10:40:34.279806   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /usr/share/ca-certificates/107582.pem (1708 bytes)
	I0610 10:40:34.303155   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 10:40:34.326009   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem --> /usr/share/ca-certificates/10758.pem (1338 bytes)
	I0610 10:40:34.347846   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0610 10:40:34.363438   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0610 10:40:34.379176   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0610 10:40:34.394884   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0610 10:40:34.411721   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0610 10:40:34.427602   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0610 10:40:34.445919   21811 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
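The stat/scp sequence above is minikube's ssh_runner at work: each command gets its own SSH session against the primary control plane (192.168.39.208, user docker, the machine's id_rsa key from the sshutil.go line earlier), which is how the shared cluster certificates are read into memory and then pushed to the new node. Below is a minimal Go sketch of that per-command pattern, assuming golang.org/x/crypto/ssh; it illustrates the mechanism and is not minikube's actual ssh_runner implementation.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote opens one SSH session per command, mirroring the ssh_runner.go
// Run/stat pattern in the log: connect with a private key, execute, return output.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// Connection details as they appear in the log above.
	out, err := runRemote("192.168.39.208:22", "docker",
		"/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa",
		"stat -c %s /var/lib/minikube/certs/sa.pub")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}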
	I0610 10:40:34.462559   21811 ssh_runner.go:195] Run: openssl version
	I0610 10:40:34.469091   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107582.pem && ln -fs /usr/share/ca-certificates/107582.pem /etc/ssl/certs/107582.pem"
	I0610 10:40:34.480339   21811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107582.pem
	I0610 10:40:34.484773   21811 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:34 /usr/share/ca-certificates/107582.pem
	I0610 10:40:34.484835   21811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107582.pem
	I0610 10:40:34.490314   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107582.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 10:40:34.500730   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 10:40:34.511174   21811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:40:34.515178   21811 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:40:34.515237   21811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:40:34.520333   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 10:40:34.530433   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10758.pem && ln -fs /usr/share/ca-certificates/10758.pem /etc/ssl/certs/10758.pem"
	I0610 10:40:34.540090   21811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10758.pem
	I0610 10:40:34.544131   21811 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:34 /usr/share/ca-certificates/10758.pem
	I0610 10:40:34.544191   21811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10758.pem
	I0610 10:40:34.549491   21811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10758.pem /etc/ssl/certs/51391683.0"
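The three blocks above (107582.pem, minikubeCA.pem, 10758.pem) install each CA into /usr/share/ca-certificates and then create the /etc/ssl/certs/<subject-hash>.0 symlink that OpenSSL uses to look certificates up by subject hash; b5213941.0 in the log is exactly the hash openssl printed for minikubeCA.pem. A short Go sketch of that hash-and-link step, shelling out to the same openssl invocation (paths taken from the log; illustrative, not minikube's code):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash reproduces the "openssl x509 -hash" + "ln -fs" pair from the
// log: OpenSSL resolves CA certs as /etc/ssl/certs/<subject-hash>.0 symlinks.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate ln -fs: replace an existing link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		log.Fatal(err)
	}
}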
	I0610 10:40:34.558986   21811 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 10:40:34.562931   21811 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 10:40:34.562987   21811 kubeadm.go:928] updating node {m03 192.168.39.76 8443 v1.30.1 crio true true} ...
	I0610 10:40:34.563068   21811 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565925-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.76
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
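The kubelet drop-in shown above is rendered per node: everything is fixed except the binary version, --hostname-override and --node-ip, which come from the cluster config printed after it (v1.30.1, ha-565925-m03, 192.168.39.76 for this join). A text/template sketch of that rendering; the struct and field names below are placeholders, not minikube's actual template data.

package main

import (
	"os"
	"text/template"
)

// kubeletUnit mirrors the unit text logged above; only the three template
// fields change from node to node.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

type kubeletParams struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Values taken from the log for the third control-plane node.
	_ = t.Execute(os.Stdout, kubeletParams{
		KubernetesVersion: "v1.30.1",
		NodeName:          "ha-565925-m03",
		NodeIP:            "192.168.39.76",
	})
}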
	I0610 10:40:34.563092   21811 kube-vip.go:115] generating kube-vip config ...
	I0610 10:40:34.563122   21811 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0610 10:40:34.577712   21811 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0610 10:40:34.577772   21811 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
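This static pod is what provides the API server HA VIP: with cp_enable and vip_leaderelection set, the kube-vip instance on each control-plane node competes for the plndr-cp-lock lease in kube-system, and the current leader announces 192.168.39.254 via ARP on eth0 and, because lb_enable is set, load-balances port 8443 across the API servers. Below is a small sketch that reads such a manifest back and pulls out the VIP, assuming gopkg.in/yaml.v3 for parsing; the struct models only the fields it needs.

package main

import (
	"fmt"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// staticPod captures just enough of the kube-vip manifest to read its env vars.
type staticPod struct {
	Spec struct {
		Containers []struct {
			Name string `yaml:"name"`
			Env  []struct {
				Name  string `yaml:"name"`
				Value string `yaml:"value"`
			} `yaml:"env"`
		} `yaml:"containers"`
	} `yaml:"spec"`
}

func main() {
	raw, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		log.Fatal(err)
	}
	var pod staticPod
	if err := yaml.Unmarshal(raw, &pod); err != nil {
		log.Fatal(err)
	}
	for _, c := range pod.Spec.Containers {
		for _, e := range c.Env {
			if e.Name == "address" {
				fmt.Printf("kube-vip will announce VIP %s\n", e.Value) // 192.168.39.254 in this run
			}
		}
	}
}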
	I0610 10:40:34.577841   21811 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 10:40:34.586773   21811 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0610 10:40:34.586835   21811 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0610 10:40:34.596214   21811 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0610 10:40:34.596233   21811 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0610 10:40:34.596242   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0610 10:40:34.596255   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0610 10:40:34.596274   21811 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0610 10:40:34.596309   21811 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0610 10:40:34.596332   21811 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0610 10:40:34.596311   21811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:40:34.605576   21811 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0610 10:40:34.605612   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0610 10:40:34.605907   21811 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0610 10:40:34.605944   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0610 10:40:34.627908   21811 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0610 10:40:34.628008   21811 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0610 10:40:34.730261   21811 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0610 10:40:34.730305   21811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
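The "Not caching binary" lines show each download URL paired with ?checksum=file:<url>.sha256, i.e. every kubeadm/kubectl/kubelet binary is checked against the digest dl.k8s.io publishes next to it before being cached locally and copied to the node. A rough Go sketch of that verify-while-downloading step (stdlib only; not minikube's actual downloader, and temp-file and permission handling are omitted):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads url into dest and compares the result against the
// hex digest served at url+".sha256", the same pairing shown in the log above.
func fetchVerified(url, dest string) error {
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(sumBytes))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file for %s", url)
	}
	want := fields[0]

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch for %s: got %s want %s", url, got, want)
	}
	return nil
}

func main() {
	url := "https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet"
	if err := fetchVerified(url, "/tmp/kubelet"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("kubelet downloaded and verified")
}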
	I0610 10:40:35.509956   21811 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0610 10:40:35.520028   21811 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0610 10:40:35.536892   21811 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 10:40:35.554633   21811 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0610 10:40:35.571335   21811 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0610 10:40:35.575481   21811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
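The bash one-liner above keeps exactly one control-plane.minikube.internal entry in /etc/hosts, pointing at the HA VIP 192.168.39.254, by filtering out any old entry and rewriting the file through a temp file. The same idea in Go, as a simplified sketch (hardcoded paths, no sudo handling, 0644 permissions assumed):

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for host and appends "ip\thost",
// writing through a temp file and rename like the shell pipeline in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+host) {
			continue // old entry for this host
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}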
	I0610 10:40:35.588334   21811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:40:35.712100   21811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 10:40:35.729688   21811 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:40:35.730051   21811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:40:35.730103   21811 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:40:35.745864   21811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38503
	I0610 10:40:35.746283   21811 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:40:35.746807   21811 main.go:141] libmachine: Using API Version  1
	I0610 10:40:35.746830   21811 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:40:35.747214   21811 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:40:35.747413   21811 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:40:35.747529   21811 start.go:316] joinCluster: &{Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:40:35.747683   21811 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0610 10:40:35.747702   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:40:35.750997   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:40:35.751410   21811 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:40:35.751430   21811 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:40:35.751614   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:40:35.751776   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:40:35.751933   21811 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:40:35.752055   21811 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:40:36.030423   21811 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 10:40:36.030481   21811 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token h1yzks.ltnn52dog1u09foz --discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565925-m03 --control-plane --apiserver-advertise-address=192.168.39.76 --apiserver-bind-port=8443"
	I0610 10:40:59.310507   21811 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token h1yzks.ltnn52dog1u09foz --discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565925-m03 --control-plane --apiserver-advertise-address=192.168.39.76 --apiserver-bind-port=8443": (23.279996408s)
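The join command that just completed carries the two pieces the new control-plane node needs: a bootstrap token and --discovery-token-ca-cert-hash, which pins the cluster CA so the joining node can trust the discovery data. kubeadm computes that hash as the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info; the sketch below reproduces it from the CA that was copied to /var/lib/minikube/certs/ca.crt earlier in this log.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

// caCertHash computes the value kubeadm expects for
// --discovery-token-ca-cert-hash: sha256 over the CA's SubjectPublicKeyInfo.
func caCertHash(caPath string) (string, error) {
	raw, err := os.ReadFile(caPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return "", fmt.Errorf("no PEM data in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return "sha256:" + hex.EncodeToString(sum[:]), nil
}

func main() {
	// The cluster CA lives at /var/lib/minikube/certs/ca.crt on minikube nodes
	// (see the scp lines earlier in this log).
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(h)
}

The printed value should match the sha256:f6bd6fc2... string embedded in the kubeadm join command above.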
	I0610 10:40:59.310545   21811 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0610 10:40:59.862689   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565925-m03 minikube.k8s.io/updated_at=2024_06_10T10_40_59_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=ha-565925 minikube.k8s.io/primary=false
	I0610 10:40:59.991741   21811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-565925-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0610 10:41:00.102879   21811 start.go:318] duration metric: took 24.355343976s to joinCluster
	I0610 10:41:00.102952   21811 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 10:41:00.104331   21811 out.go:177] * Verifying Kubernetes components...
	I0610 10:41:00.103248   21811 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:41:00.105592   21811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:41:00.415091   21811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 10:41:00.451391   21811 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 10:41:00.451658   21811 kapi.go:59] client config for ha-565925: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.crt", KeyFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.key", CAFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfaf80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0610 10:41:00.451721   21811 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.208:8443
	I0610 10:41:00.451896   21811 node_ready.go:35] waiting up to 6m0s for node "ha-565925-m03" to be "Ready" ...
	I0610 10:41:00.451955   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:00.451963   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:00.451970   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:00.451973   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:00.457416   21811 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 10:41:00.952872   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:00.952895   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:00.952905   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:00.952914   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:00.956651   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:01.452202   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:01.452234   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:01.452244   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:01.452249   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:01.455691   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:01.952818   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:01.952853   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:01.952875   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:01.952879   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:01.956530   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:02.452074   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:02.452096   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:02.452110   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:02.452115   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:02.455726   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:02.456257   21811 node_ready.go:53] node "ha-565925-m03" has status "Ready":"False"
	I0610 10:41:02.952809   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:02.952880   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:02.952891   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:02.952896   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:02.956516   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:03.452358   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:03.452380   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:03.452388   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:03.452393   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:03.456184   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:03.952483   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:03.952504   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:03.952513   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:03.952519   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:03.956051   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:04.452262   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:04.452284   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:04.452291   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:04.452296   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:04.455788   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:04.952016   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:04.952068   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:04.952079   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:04.952091   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:04.955611   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:04.956246   21811 node_ready.go:53] node "ha-565925-m03" has status "Ready":"False"
	I0610 10:41:05.452534   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:05.452557   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:05.452565   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:05.452568   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:05.456632   21811 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:41:05.952150   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:05.952171   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:05.952179   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:05.952183   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:05.955673   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:06.452594   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:06.452618   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:06.452626   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:06.452630   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:06.455526   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:06.952469   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:06.952493   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:06.952504   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:06.952510   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:06.955666   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:06.956481   21811 node_ready.go:53] node "ha-565925-m03" has status "Ready":"False"
	I0610 10:41:07.452930   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:07.452996   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:07.453007   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:07.453013   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:07.455849   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:07.456349   21811 node_ready.go:49] node "ha-565925-m03" has status "Ready":"True"
	I0610 10:41:07.456366   21811 node_ready.go:38] duration metric: took 7.004457662s for node "ha-565925-m03" to be "Ready" ...
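The block of repeated GET /api/v1/nodes/ha-565925-m03 requests above is node_ready.go polling roughly every 500ms until the node reports a Ready condition with status True, which took about 7 seconds here. Below is a stripped-down version of that loop, authenticating with the profile's client certificate the way the kapi.go client config above does (paths from this report; the overall 6m0s timeout and most error handling are omitted for brevity).

package main

import (
	"crypto/tls"
	"crypto/x509"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
	"time"
)

// nodeStatus models only the piece of the Node object the readiness check needs.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	profile := "/home/jenkins/minikube-integration/19046-3880/.minikube"
	cert, err := tls.LoadX509KeyPair(profile+"/profiles/ha-565925/client.crt", profile+"/profiles/ha-565925/client.key")
	if err != nil {
		log.Fatal(err)
	}
	caPEM, err := os.ReadFile(profile + "/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
		Certificates: []tls.Certificate{cert},
		RootCAs:      pool,
	}}}

	url := "https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03"
	for {
		resp, err := client.Get(url)
		if err != nil {
			log.Fatal(err)
		}
		var node nodeStatus
		_ = json.NewDecoder(resp.Body).Decode(&node)
		resp.Body.Close()
		for _, c := range node.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				fmt.Println("node is Ready")
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
}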
	I0610 10:41:07.456374   21811 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 10:41:07.456426   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I0610 10:41:07.456435   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:07.456443   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:07.456448   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:07.463172   21811 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 10:41:07.470000   21811 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:07.470075   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 10:41:07.470083   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:07.470090   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:07.470096   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:07.473333   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:07.474159   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:41:07.474176   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:07.474186   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:07.474191   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:07.476705   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:07.477267   21811 pod_ready.go:92] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:07.477285   21811 pod_ready.go:81] duration metric: took 7.259942ms for pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:07.477295   21811 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wn6nh" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:07.477354   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wn6nh
	I0610 10:41:07.477364   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:07.477373   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:07.477378   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:07.482359   21811 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:41:07.482941   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:41:07.482953   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:07.482960   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:07.482964   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:07.485814   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:07.486307   21811 pod_ready.go:92] pod "coredns-7db6d8ff4d-wn6nh" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:07.486324   21811 pod_ready.go:81] duration metric: took 9.021797ms for pod "coredns-7db6d8ff4d-wn6nh" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:07.486339   21811 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:07.486403   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925
	I0610 10:41:07.486413   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:07.486422   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:07.486429   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:07.489824   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:07.490287   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:41:07.490305   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:07.490315   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:07.490320   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:07.492347   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:07.492861   21811 pod_ready.go:92] pod "etcd-ha-565925" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:07.492877   21811 pod_ready.go:81] duration metric: took 6.531211ms for pod "etcd-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:07.492888   21811 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:07.492989   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m02
	I0610 10:41:07.493003   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:07.493013   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:07.493023   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:07.495308   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:07.495958   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:41:07.495998   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:07.496026   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:07.496036   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:07.498709   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:07.499086   21811 pod_ready.go:92] pod "etcd-ha-565925-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:07.499098   21811 pod_ready.go:81] duration metric: took 6.204218ms for pod "etcd-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:07.499106   21811 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-565925-m03" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:07.653468   21811 request.go:629] Waited for 154.307525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:07.653523   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:07.653529   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:07.653560   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:07.653569   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:07.657367   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:07.853472   21811 request.go:629] Waited for 195.469114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:07.853535   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:07.853542   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:07.853553   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:07.853562   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:07.856466   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
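The "Waited for ... due to client-side throttling, not priority and fairness" messages are produced by the Kubernetes client's own token-bucket rate limiter, not by the API server: the kapi.go config above shows QPS:0, Burst:0, so the client-go defaults of 5 requests/second with a burst of 10 apply, and the paired pod+node polls here are bursty enough to exhaust the burst. A minimal illustration of that limiter, assuming the k8s.io/client-go/util/flowcontrol package:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Same shape as client-go's default request throttling: 5 requests/second
	// sustained, bursts of up to 10 tokens.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)

	for i := 0; i < 15; i++ {
		start := time.Now()
		limiter.Accept() // blocks until a token is available
		waited := time.Since(start)
		if waited > time.Millisecond {
			// Once the burst is spent, each call waits ~200ms at 5 QPS, which is
			// what the request.go "Waited for ..." lines in the log report.
			fmt.Printf("request %d throttled for %v\n", i, waited)
		}
	}
}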
	I0610 10:41:08.053541   21811 request.go:629] Waited for 54.246552ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:08.053604   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:08.053610   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:08.053620   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:08.053637   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:08.057135   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:08.253024   21811 request.go:629] Waited for 195.35667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:08.253099   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:08.253108   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:08.253126   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:08.253133   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:08.259607   21811 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 10:41:08.499397   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:08.499428   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:08.499436   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:08.499439   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:08.502919   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:08.653923   21811 request.go:629] Waited for 150.309174ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:08.653992   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:08.653998   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:08.654005   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:08.654009   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:08.657112   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:08.999902   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:08.999932   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:08.999940   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:08.999944   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:09.002885   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:09.053672   21811 request.go:629] Waited for 50.2193ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:09.053737   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:09.053745   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:09.053759   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:09.053766   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:09.056943   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:09.500130   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:09.500147   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:09.500155   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:09.500160   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:09.503204   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:09.503851   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:09.503866   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:09.503874   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:09.503878   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:09.506739   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:09.507205   21811 pod_ready.go:102] pod "etcd-ha-565925-m03" in "kube-system" namespace has status "Ready":"False"
	I0610 10:41:09.999578   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:09.999600   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:09.999610   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:09.999617   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:10.003109   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:10.003722   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:10.003738   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:10.003745   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:10.003749   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:10.006533   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:10.499652   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:10.499671   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:10.499681   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:10.499688   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:10.503181   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:10.503914   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:10.503929   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:10.503958   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:10.503968   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:10.506488   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:10.999849   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:10.999873   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:10.999884   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:10.999889   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:11.003058   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:11.003757   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:11.003774   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:11.003781   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:11.003784   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:11.006504   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:11.499519   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:11.499541   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:11.499553   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:11.499558   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:11.502468   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:11.503108   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:11.503123   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:11.503133   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:11.503136   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:11.505496   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:11.999588   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:11.999610   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:11.999618   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:11.999622   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:12.002908   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:12.003613   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:12.003627   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:12.003634   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:12.003638   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:12.006896   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:12.007399   21811 pod_ready.go:102] pod "etcd-ha-565925-m03" in "kube-system" namespace has status "Ready":"False"
	I0610 10:41:12.499681   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:12.499702   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:12.499709   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:12.499714   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:12.502539   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:12.503306   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:12.503325   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:12.503335   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:12.503341   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:12.505852   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:13.000227   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:13.000277   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:13.000289   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:13.000296   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:13.003436   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:13.004420   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:13.004438   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:13.004456   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:13.004465   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:13.007317   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:13.500393   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:13.500420   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:13.500430   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:13.500436   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:13.504782   21811 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:41:13.505356   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:13.505371   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:13.505379   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:13.505385   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:13.508006   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:13.999714   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:13.999732   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:13.999741   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:13.999746   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:14.003320   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:14.004114   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:14.004131   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:14.004141   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:14.004145   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:14.007268   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:14.007850   21811 pod_ready.go:102] pod "etcd-ha-565925-m03" in "kube-system" namespace has status "Ready":"False"
	I0610 10:41:14.500166   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:14.500185   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:14.500192   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:14.500194   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:14.506559   21811 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 10:41:14.507357   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:14.507375   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:14.507385   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:14.507390   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:14.509902   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:14.999744   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/etcd-ha-565925-m03
	I0610 10:41:14.999771   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:14.999781   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:14.999786   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:15.004161   21811 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:41:15.004894   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:15.004910   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:15.004928   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:15.004932   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:15.007751   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:15.008310   21811 pod_ready.go:92] pod "etcd-ha-565925-m03" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:15.008330   21811 pod_ready.go:81] duration metric: took 7.509218371s for pod "etcd-ha-565925-m03" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:15.008346   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:15.008408   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925
	I0610 10:41:15.008415   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:15.008422   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:15.008429   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:15.011990   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:15.012993   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:41:15.013046   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:15.013060   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:15.013066   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:15.020219   21811 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 10:41:15.020787   21811 pod_ready.go:92] pod "kube-apiserver-ha-565925" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:15.020808   21811 pod_ready.go:81] duration metric: took 12.4522ms for pod "kube-apiserver-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:15.020821   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:15.020886   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925-m02
	I0610 10:41:15.020896   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:15.020906   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:15.020914   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:15.023901   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:15.024541   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:41:15.024558   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:15.024568   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:15.024577   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:15.027137   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:15.027507   21811 pod_ready.go:92] pod "kube-apiserver-ha-565925-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:15.027524   21811 pod_ready.go:81] duration metric: took 6.696061ms for pod "kube-apiserver-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:15.027536   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-565925-m03" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:15.027605   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925-m03
	I0610 10:41:15.027618   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:15.027628   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:15.027633   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:15.030410   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:15.053192   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:15.053217   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:15.053226   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:15.053230   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:15.056115   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:15.528482   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925-m03
	I0610 10:41:15.528501   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:15.528509   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:15.528513   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:15.532201   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:15.532997   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:15.533012   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:15.533019   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:15.533023   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:15.535607   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:16.027885   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-565925-m03
	I0610 10:41:16.027909   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:16.027917   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:16.027923   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:16.031124   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:16.031926   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:16.031991   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:16.032006   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:16.032012   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:16.034578   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:16.035129   21811 pod_ready.go:92] pod "kube-apiserver-ha-565925-m03" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:16.035149   21811 pod_ready.go:81] duration metric: took 1.007600126s for pod "kube-apiserver-ha-565925-m03" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:16.035158   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:16.053585   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565925
	I0610 10:41:16.053608   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:16.053616   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:16.053620   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:16.057108   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:16.253181   21811 request.go:629] Waited for 195.000831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:41:16.253739   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:41:16.253746   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:16.253755   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:16.253759   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:16.256995   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:16.257598   21811 pod_ready.go:92] pod "kube-controller-manager-ha-565925" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:16.257615   21811 pod_ready.go:81] duration metric: took 222.449236ms for pod "kube-controller-manager-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:16.257625   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:16.453184   21811 request.go:629] Waited for 195.504869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565925-m02
	I0610 10:41:16.453245   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565925-m02
	I0610 10:41:16.453257   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:16.453277   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:16.453284   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:16.456483   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:16.653657   21811 request.go:629] Waited for 196.360099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:41:16.653706   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:41:16.653711   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:16.653717   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:16.653721   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:16.656972   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:16.657733   21811 pod_ready.go:92] pod "kube-controller-manager-ha-565925-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:16.657756   21811 pod_ready.go:81] duration metric: took 400.123605ms for pod "kube-controller-manager-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:16.657769   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-565925-m03" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:16.853705   21811 request.go:629] Waited for 195.851399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565925-m03
	I0610 10:41:16.853763   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-565925-m03
	I0610 10:41:16.853768   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:16.853774   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:16.853780   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:16.857401   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:17.053481   21811 request.go:629] Waited for 195.377671ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:17.053543   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:17.053548   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:17.053554   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:17.053558   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:17.056457   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:17.056869   21811 pod_ready.go:92] pod "kube-controller-manager-ha-565925-m03" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:17.056887   21811 pod_ready.go:81] duration metric: took 399.110601ms for pod "kube-controller-manager-ha-565925-m03" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:17.056897   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d44ft" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:17.253959   21811 request.go:629] Waited for 197.000123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d44ft
	I0610 10:41:17.254034   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d44ft
	I0610 10:41:17.254039   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:17.254046   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:17.254052   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:17.259452   21811 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 10:41:17.453381   21811 request.go:629] Waited for 193.283661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:17.453443   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:17.453457   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:17.453467   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:17.453478   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:17.456665   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:17.457111   21811 pod_ready.go:92] pod "kube-proxy-d44ft" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:17.457130   21811 pod_ready.go:81] duration metric: took 400.226885ms for pod "kube-proxy-d44ft" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:17.457143   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vbgnx" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:17.653265   21811 request.go:629] Waited for 196.03805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbgnx
	I0610 10:41:17.653330   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbgnx
	I0610 10:41:17.653338   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:17.653352   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:17.653360   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:17.657669   21811 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 10:41:17.853857   21811 request.go:629] Waited for 195.217398ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:41:17.853945   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:41:17.853956   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:17.853967   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:17.853973   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:17.857603   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:17.858165   21811 pod_ready.go:92] pod "kube-proxy-vbgnx" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:17.858196   21811 pod_ready.go:81] duration metric: took 401.034656ms for pod "kube-proxy-vbgnx" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:17.858210   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wdjhn" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:18.053438   21811 request.go:629] Waited for 195.16165ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wdjhn
	I0610 10:41:18.053510   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wdjhn
	I0610 10:41:18.053515   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:18.053522   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:18.053528   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:18.061200   21811 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 10:41:18.253302   21811 request.go:629] Waited for 191.397214ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:41:18.253360   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:41:18.253365   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:18.253372   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:18.253375   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:18.256843   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:18.257608   21811 pod_ready.go:92] pod "kube-proxy-wdjhn" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:18.257631   21811 pod_ready.go:81] duration metric: took 399.412602ms for pod "kube-proxy-wdjhn" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:18.257645   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:18.453730   21811 request.go:629] Waited for 196.017576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565925
	I0610 10:41:18.453827   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565925
	I0610 10:41:18.453838   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:18.453849   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:18.453858   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:18.456757   21811 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 10:41:18.653818   21811 request.go:629] Waited for 196.381655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:41:18.653871   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 10:41:18.653876   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:18.653883   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:18.653887   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:18.657171   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:18.657641   21811 pod_ready.go:92] pod "kube-scheduler-ha-565925" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:18.657659   21811 pod_ready.go:81] duration metric: took 400.006901ms for pod "kube-scheduler-ha-565925" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:18.657668   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:18.853312   21811 request.go:629] Waited for 195.566307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565925-m02
	I0610 10:41:18.853373   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565925-m02
	I0610 10:41:18.853379   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:18.853386   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:18.853390   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:18.856573   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:19.053424   21811 request.go:629] Waited for 196.332307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:41:19.053489   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m02
	I0610 10:41:19.053494   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:19.053501   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:19.053505   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:19.056878   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:19.057621   21811 pod_ready.go:92] pod "kube-scheduler-ha-565925-m02" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:19.057644   21811 pod_ready.go:81] duration metric: took 399.969423ms for pod "kube-scheduler-ha-565925-m02" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:19.057657   21811 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-565925-m03" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:19.253647   21811 request.go:629] Waited for 195.908915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565925-m03
	I0610 10:41:19.253728   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-565925-m03
	I0610 10:41:19.253741   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:19.253751   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:19.253760   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:19.257377   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:19.453389   21811 request.go:629] Waited for 195.357232ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:19.453455   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m03
	I0610 10:41:19.453462   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:19.453472   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:19.453477   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:19.456783   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:19.457428   21811 pod_ready.go:92] pod "kube-scheduler-ha-565925-m03" in "kube-system" namespace has status "Ready":"True"
	I0610 10:41:19.457447   21811 pod_ready.go:81] duration metric: took 399.782461ms for pod "kube-scheduler-ha-565925-m03" in "kube-system" namespace to be "Ready" ...
	I0610 10:41:19.457458   21811 pod_ready.go:38] duration metric: took 12.001075789s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 10:41:19.457474   21811 api_server.go:52] waiting for apiserver process to appear ...
	I0610 10:41:19.457524   21811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:41:19.472808   21811 api_server.go:72] duration metric: took 19.36982533s to wait for apiserver process to appear ...
	I0610 10:41:19.472837   21811 api_server.go:88] waiting for apiserver healthz status ...
	I0610 10:41:19.472856   21811 api_server.go:253] Checking apiserver healthz at https://192.168.39.208:8443/healthz ...
	I0610 10:41:19.478589   21811 api_server.go:279] https://192.168.39.208:8443/healthz returned 200:
	ok
	I0610 10:41:19.478658   21811 round_trippers.go:463] GET https://192.168.39.208:8443/version
	I0610 10:41:19.478666   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:19.478676   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:19.478686   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:19.479654   21811 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0610 10:41:19.479739   21811 api_server.go:141] control plane version: v1.30.1
	I0610 10:41:19.479752   21811 api_server.go:131] duration metric: took 6.910869ms to wait for apiserver health ...
	I0610 10:41:19.479759   21811 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 10:41:19.653486   21811 request.go:629] Waited for 173.661312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I0610 10:41:19.653542   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I0610 10:41:19.653547   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:19.653559   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:19.653563   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:19.660708   21811 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 10:41:19.667056   21811 system_pods.go:59] 24 kube-system pods found
	I0610 10:41:19.667101   21811 system_pods.go:61] "coredns-7db6d8ff4d-545cf" [7564efde-b96c-48b3-b194-bca695f7ae95] Running
	I0610 10:41:19.667109   21811 system_pods.go:61] "coredns-7db6d8ff4d-wn6nh" [9e47f047-e98b-48c8-8a33-8f790a3e8017] Running
	I0610 10:41:19.667115   21811 system_pods.go:61] "etcd-ha-565925" [527cd8fc-9ac8-4432-a265-910957e9268f] Running
	I0610 10:41:19.667121   21811 system_pods.go:61] "etcd-ha-565925-m02" [7068fe45-72fe-4204-8742-d8803e585954] Running
	I0610 10:41:19.667128   21811 system_pods.go:61] "etcd-ha-565925-m03" [91c6bcb4-59b4-4a31-a5e4-f32d9491b566] Running
	I0610 10:41:19.667133   21811 system_pods.go:61] "kindnet-9jv7q" [2f97ff84-bae1-4e63-9e9a-08e9e7afe68b] Running
	I0610 10:41:19.667139   21811 system_pods.go:61] "kindnet-9tcng" [c47fe372-aee9-4fb2-9c62-b84341af1c81] Running
	I0610 10:41:19.667144   21811 system_pods.go:61] "kindnet-rnn59" [9141e131-eebc-4f51-8b55-46ff649ffaee] Running
	I0610 10:41:19.667151   21811 system_pods.go:61] "kube-apiserver-ha-565925" [75b7b060-85f2-45ca-a58e-a42a8c2d7fab] Running
	I0610 10:41:19.667164   21811 system_pods.go:61] "kube-apiserver-ha-565925-m02" [a7e4eed5-4ada-4063-a8e1-f82ed820f684] Running
	I0610 10:41:19.667171   21811 system_pods.go:61] "kube-apiserver-ha-565925-m03" [225e7590-3610-4bce-9224-88a67f0f7226] Running
	I0610 10:41:19.667181   21811 system_pods.go:61] "kube-controller-manager-ha-565925" [cd41ddc9-22af-4789-a9ea-3663a5de415b] Running
	I0610 10:41:19.667190   21811 system_pods.go:61] "kube-controller-manager-ha-565925-m02" [6b2d5860-4e09-4eeb-a9e3-24952ec3fab4] Running
	I0610 10:41:19.667200   21811 system_pods.go:61] "kube-controller-manager-ha-565925-m03" [2f1dc404-5a14-4ced-ba6d-746e6cd75e57] Running
	I0610 10:41:19.667206   21811 system_pods.go:61] "kube-proxy-d44ft" [7a77472b-d577-4781-bc02-70dbe0c31659] Running
	I0610 10:41:19.667215   21811 system_pods.go:61] "kube-proxy-vbgnx" [f43735ae-adc0-4fe4-992e-b640b52886d7] Running
	I0610 10:41:19.667222   21811 system_pods.go:61] "kube-proxy-wdjhn" [da3ac11b-0906-4695-80b1-f3f4f1a34de1] Running
	I0610 10:41:19.667228   21811 system_pods.go:61] "kube-scheduler-ha-565925" [74663e0a-7f9e-4211-b165-39358cb3b3e2] Running
	I0610 10:41:19.667235   21811 system_pods.go:61] "kube-scheduler-ha-565925-m02" [745d6073-f0af-4aa5-9345-38c9b88dad69] Running
	I0610 10:41:19.667244   21811 system_pods.go:61] "kube-scheduler-ha-565925-m03" [844a6fd4-2d91-47fb-b692-c899c7461a32] Running
	I0610 10:41:19.667251   21811 system_pods.go:61] "kube-vip-ha-565925" [039ffa3e-aac6-4bdc-a576-0158c7fb283d] Running
	I0610 10:41:19.667260   21811 system_pods.go:61] "kube-vip-ha-565925-m02" [f28be16a-38b2-4746-8b18-ab0014783aad] Running
	I0610 10:41:19.667269   21811 system_pods.go:61] "kube-vip-ha-565925-m03" [de1604b6-d98b-4be7-a72e-5500cc89e497] Running
	I0610 10:41:19.667274   21811 system_pods.go:61] "storage-provisioner" [0ca60a36-c445-4520-b857-7df39dfed848] Running
	I0610 10:41:19.667283   21811 system_pods.go:74] duration metric: took 187.51707ms to wait for pod list to return data ...
	I0610 10:41:19.667297   21811 default_sa.go:34] waiting for default service account to be created ...
	I0610 10:41:19.853710   21811 request.go:629] Waited for 186.338285ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/default/serviceaccounts
	I0610 10:41:19.853781   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/default/serviceaccounts
	I0610 10:41:19.853789   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:19.853798   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:19.853804   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:19.857692   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:19.857827   21811 default_sa.go:45] found service account: "default"
	I0610 10:41:19.857845   21811 default_sa.go:55] duration metric: took 190.537888ms for default service account to be created ...
	I0610 10:41:19.857853   21811 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 10:41:20.054008   21811 request.go:629] Waited for 196.075313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I0610 10:41:20.054068   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I0610 10:41:20.054073   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:20.054080   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:20.054086   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:20.061196   21811 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 10:41:20.067104   21811 system_pods.go:86] 24 kube-system pods found
	I0610 10:41:20.067130   21811 system_pods.go:89] "coredns-7db6d8ff4d-545cf" [7564efde-b96c-48b3-b194-bca695f7ae95] Running
	I0610 10:41:20.067136   21811 system_pods.go:89] "coredns-7db6d8ff4d-wn6nh" [9e47f047-e98b-48c8-8a33-8f790a3e8017] Running
	I0610 10:41:20.067140   21811 system_pods.go:89] "etcd-ha-565925" [527cd8fc-9ac8-4432-a265-910957e9268f] Running
	I0610 10:41:20.067144   21811 system_pods.go:89] "etcd-ha-565925-m02" [7068fe45-72fe-4204-8742-d8803e585954] Running
	I0610 10:41:20.067148   21811 system_pods.go:89] "etcd-ha-565925-m03" [91c6bcb4-59b4-4a31-a5e4-f32d9491b566] Running
	I0610 10:41:20.067153   21811 system_pods.go:89] "kindnet-9jv7q" [2f97ff84-bae1-4e63-9e9a-08e9e7afe68b] Running
	I0610 10:41:20.067157   21811 system_pods.go:89] "kindnet-9tcng" [c47fe372-aee9-4fb2-9c62-b84341af1c81] Running
	I0610 10:41:20.067161   21811 system_pods.go:89] "kindnet-rnn59" [9141e131-eebc-4f51-8b55-46ff649ffaee] Running
	I0610 10:41:20.067166   21811 system_pods.go:89] "kube-apiserver-ha-565925" [75b7b060-85f2-45ca-a58e-a42a8c2d7fab] Running
	I0610 10:41:20.067174   21811 system_pods.go:89] "kube-apiserver-ha-565925-m02" [a7e4eed5-4ada-4063-a8e1-f82ed820f684] Running
	I0610 10:41:20.067178   21811 system_pods.go:89] "kube-apiserver-ha-565925-m03" [225e7590-3610-4bce-9224-88a67f0f7226] Running
	I0610 10:41:20.067185   21811 system_pods.go:89] "kube-controller-manager-ha-565925" [cd41ddc9-22af-4789-a9ea-3663a5de415b] Running
	I0610 10:41:20.067190   21811 system_pods.go:89] "kube-controller-manager-ha-565925-m02" [6b2d5860-4e09-4eeb-a9e3-24952ec3fab4] Running
	I0610 10:41:20.067198   21811 system_pods.go:89] "kube-controller-manager-ha-565925-m03" [2f1dc404-5a14-4ced-ba6d-746e6cd75e57] Running
	I0610 10:41:20.067202   21811 system_pods.go:89] "kube-proxy-d44ft" [7a77472b-d577-4781-bc02-70dbe0c31659] Running
	I0610 10:41:20.067209   21811 system_pods.go:89] "kube-proxy-vbgnx" [f43735ae-adc0-4fe4-992e-b640b52886d7] Running
	I0610 10:41:20.067213   21811 system_pods.go:89] "kube-proxy-wdjhn" [da3ac11b-0906-4695-80b1-f3f4f1a34de1] Running
	I0610 10:41:20.067220   21811 system_pods.go:89] "kube-scheduler-ha-565925" [74663e0a-7f9e-4211-b165-39358cb3b3e2] Running
	I0610 10:41:20.067223   21811 system_pods.go:89] "kube-scheduler-ha-565925-m02" [745d6073-f0af-4aa5-9345-38c9b88dad69] Running
	I0610 10:41:20.067230   21811 system_pods.go:89] "kube-scheduler-ha-565925-m03" [844a6fd4-2d91-47fb-b692-c899c7461a32] Running
	I0610 10:41:20.067233   21811 system_pods.go:89] "kube-vip-ha-565925" [039ffa3e-aac6-4bdc-a576-0158c7fb283d] Running
	I0610 10:41:20.067239   21811 system_pods.go:89] "kube-vip-ha-565925-m02" [f28be16a-38b2-4746-8b18-ab0014783aad] Running
	I0610 10:41:20.067243   21811 system_pods.go:89] "kube-vip-ha-565925-m03" [de1604b6-d98b-4be7-a72e-5500cc89e497] Running
	I0610 10:41:20.067249   21811 system_pods.go:89] "storage-provisioner" [0ca60a36-c445-4520-b857-7df39dfed848] Running
	I0610 10:41:20.067254   21811 system_pods.go:126] duration metric: took 209.396723ms to wait for k8s-apps to be running ...
	I0610 10:41:20.067264   21811 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 10:41:20.067300   21811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:41:20.082850   21811 system_svc.go:56] duration metric: took 15.577071ms WaitForService to wait for kubelet
	I0610 10:41:20.082882   21811 kubeadm.go:576] duration metric: took 19.979901985s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:41:20.082908   21811 node_conditions.go:102] verifying NodePressure condition ...
	I0610 10:41:20.253501   21811 request.go:629] Waited for 170.515902ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.208:8443/api/v1/nodes
	I0610 10:41:20.253562   21811 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes
	I0610 10:41:20.253570   21811 round_trippers.go:469] Request Headers:
	I0610 10:41:20.253582   21811 round_trippers.go:473]     Accept: application/json, */*
	I0610 10:41:20.253591   21811 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 10:41:20.257357   21811 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 10:41:20.258463   21811 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 10:41:20.258490   21811 node_conditions.go:123] node cpu capacity is 2
	I0610 10:41:20.258505   21811 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 10:41:20.258510   21811 node_conditions.go:123] node cpu capacity is 2
	I0610 10:41:20.258515   21811 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 10:41:20.258518   21811 node_conditions.go:123] node cpu capacity is 2
	I0610 10:41:20.258523   21811 node_conditions.go:105] duration metric: took 175.609245ms to run NodePressure ...
	I0610 10:41:20.258536   21811 start.go:240] waiting for startup goroutines ...
	I0610 10:41:20.258563   21811 start.go:254] writing updated cluster config ...
	I0610 10:41:20.258930   21811 ssh_runner.go:195] Run: rm -f paused
	I0610 10:41:20.312399   21811 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 10:41:20.314679   21811 out.go:177] * Done! kubectl is now configured to use "ha-565925" cluster and "default" namespace by default
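	The readiness checks traced above follow a simple poll loop: fetch the pod, test its Ready condition, fetch its node, and retry roughly every 500ms until the 6m0s budget is spent, with client-go's rate limiter producing the "Waited ... due to client-side throttling" pauses. The sketch below is a minimal client-go illustration of that pattern only; it is not minikube's pod_ready.go, and the kubeconfig path, namespace, and pod name are placeholders.

	// poll_ready.go: minimal sketch of the pod-Ready polling pattern seen in the
	// log above. Illustrative only; minikube uses its own helpers for this.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod has condition Ready=True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder kubeconfig path; any reachable cluster config works.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Placeholder namespace and pod name.
		ns, name := "kube-system", "etcd-example"

		// Poll every 500ms, up to 6 minutes, mirroring the cadence in the log.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat errors as transient and keep polling
				}
				return isPodReady(pod), nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %q in %q is Ready\n", name, ns)
	}

	Returning (false, nil) on a GET error keeps the loop alive through transient API hiccups, which matches the retry-until-timeout behavior visible in the trace; a production check would distinguish NotFound from other errors.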
	
	
	==> CRI-O <==
	Jun 10 10:45:54 ha-565925 crio[681]: time="2024-06-10 10:45:54.782697856Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a6083f2-2ccc-4eea-bace-13147ad90f64 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:45:54 ha-565925 crio[681]: time="2024-06-10 10:45:54.784116392Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f4d17d8-3117-4af6-985e-250110f311f3 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:45:54 ha-565925 crio[681]: time="2024-06-10 10:45:54.784737993Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718016354784712603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f4d17d8-3117-4af6-985e-250110f311f3 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:45:54 ha-565925 crio[681]: time="2024-06-10 10:45:54.788939230Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e3a554f-9667-4e43-af55-bb37709c0cbf name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:45:54 ha-565925 crio[681]: time="2024-06-10 10:45:54.788997402Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e3a554f-9667-4e43-af55-bb37709c0cbf name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:45:54 ha-565925 crio[681]: time="2024-06-10 10:45:54.789220251Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e2874c04d7e6035f0b4f93397eceefa3af883aa2a03dc83be4a8aced86a5e132,PodSandboxId:4f03a24f1c978aee692934393624f50f3f6023665dc034769ec878f8b821ad07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718016084446089772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7132613c40918526f05a0d1ea655de838d95cdfc74880ab8c90e7b98b32ee7cc,PodSandboxId:de365696855f1fe15558874733bf40446cd8ab359b3d632ae71d8cd5f32d98b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718015930142271529,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f037e4537f6182747a78c8398e388d1cd43fe536754d6d8a50f52b8689b3163,PodSandboxId:937195f05576713819cba22da4e17238c7f675cd0d37572dfc6718570bb4938f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718015930175570021,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:534a412f3a743952b0fba0175071fb9a47fd04169c4014721c7e5c6931d7e62f,PodSandboxId:b454f12ed3fe06b7ae98d62eb1932133902e43f1db5bb572871f5eb7765942b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718015930144728548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e9
8b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c76fc1da29c41233d9d8517a0d5b17f146c7cde3802483aab50bc3ba11b78b,PodSandboxId:71cfc7bcda08cf3e1c90d0f5cf5f33fc51fb4dd5f028ab6590d0b19f056460dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1718015928620479055,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa492285e9f663d2b76c575594b5ba550e97e7861c891c05022d7e2ac1a78f91,PodSandboxId:9c2610533ce9301fe46003696bb8fb9ed9f112b3cb0f1a144f0e614826879c22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171801592
5064900688,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbbb62793adf92dc3d7d5d72b02fb98e653c558237baa7067bce51a5b0c25553,PodSandboxId:235cdb6eec97308e5c02c06c504736e6bcecc139bc81369249fd408eb0a4a674,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17180159080
18315319,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7458bb04dd39e8e0618ded8278600c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b93b06d8221ab57065f3fdffaa93240b54fcaea6359ee510147933ebc24cdd,PodSandboxId:ae496093662088de763239c043f30d1770c7ce342b51213f0abd2a6d78e5beb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718015904609356393,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:538119110afb1122b2b9d43d0a15441ed76f351b1221ca19caa981d3aab0eb82,PodSandboxId:1c1c2a570436913958921b6806bdea488c57ba8e053d9bc44cde3c1407fe58c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718015904613208681,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:243a70e2c1f2d12414697f36420e1832aa5b0376a87efc3acc5785d8295da364,PodSandboxId:f17389e4e287341cc04675fc44f2af0a57d0270453e694289f6c820fa120ef66,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718015904641357133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf7ff93de6e7c74b032d544065b02f69bea61c82b2d7cd580d6673506fd0496,PodSandboxId:5319b527fdd15e4a549cd2140bbe1e0e473956046be736501f4f1692b6a0a208,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718015904537481240,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e3a554f-9667-4e43-af55-bb37709c0cbf name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:45:54 ha-565925 crio[681]: time="2024-06-10 10:45:54.811788222Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b383052b-0cd5-41b7-a016-1785bd2e725a name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 10 10:45:54 ha-565925 crio[681]: time="2024-06-10 10:45:54.812329320Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4f03a24f1c978aee692934393624f50f3f6023665dc034769ec878f8b821ad07,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-6wmkd,Uid:f8a1e0dc-e561-4def-9787-c5d0eda08fda,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718016081571821297,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T10:41:21.254050246Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:de365696855f1fe15558874733bf40446cd8ab359b3d632ae71d8cd5f32d98b7,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0ca60a36-c445-4520-b857-7df39dfed848,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1718015929917188093,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-10T10:38:49.603428236Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:937195f05576713819cba22da4e17238c7f675cd0d37572dfc6718570bb4938f,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-545cf,Uid:7564efde-b96c-48b3-b194-bca695f7ae95,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718015929906528064,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T10:38:49.597228433Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b454f12ed3fe06b7ae98d62eb1932133902e43f1db5bb572871f5eb7765942b5,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-wn6nh,Uid:9e47f047-e98b-48c8-8a33-8f790a3e8017,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1718015929897339287,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T10:38:49.589282044Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:71cfc7bcda08cf3e1c90d0f5cf5f33fc51fb4dd5f028ab6590d0b19f056460dd,Metadata:&PodSandboxMetadata{Name:kindnet-rnn59,Uid:9141e131-eebc-4f51-8b55-46ff649ffaee,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718015924979714480,Labels:map[string]string{app: kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-06-10T10:38:44.065979711Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9c2610533ce9301fe46003696bb8fb9ed9f112b3cb0f1a144f0e614826879c22,Metadata:&PodSandboxMetadata{Name:kube-proxy-wdjhn,Uid:da3ac11b-0906-4695-80b1-f3f4f1a34de1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718015924947847763,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T10:38:44.034881743Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5319b527fdd15e4a549cd2140bbe1e0e473956046be736501f4f1692b6a0a208,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-565925,Uid:d811c4cb2aa091785cd31dce6f7bed4f,Namespace:kube-system,
Attempt:0,},State:SANDBOX_READY,CreatedAt:1718015904399594883,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d811c4cb2aa091785cd31dce6f7bed4f,kubernetes.io/config.seen: 2024-06-10T10:38:23.929497802Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:235cdb6eec97308e5c02c06c504736e6bcecc139bc81369249fd408eb0a4a674,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-565925,Uid:a7458bb04dd39e8e0618ded8278600c9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718015904390163229,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7458bb04dd39e8e0618ded8278600c9,},Annotations:map[string]string{kube
rnetes.io/config.hash: a7458bb04dd39e8e0618ded8278600c9,kubernetes.io/config.seen: 2024-06-10T10:38:23.929499690Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1c1c2a570436913958921b6806bdea488c57ba8e053d9bc44cde3c1407fe58c5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-565925,Uid:0160bc841c85a002ebb521cea7065bc7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718015904389869820,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0160bc841c85a002ebb521cea7065bc7,kubernetes.io/config.seen: 2024-06-10T10:38:23.929498825Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f17389e4e287341cc04675fc44f2af0a57d0270453e694289f6c820fa120ef66,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-565925,Ui
d:12d1dab5f9db3366c19df7ea45438b14,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718015904389454557,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.208:8443,kubernetes.io/config.hash: 12d1dab5f9db3366c19df7ea45438b14,kubernetes.io/config.seen: 2024-06-10T10:38:23.929496637Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ae496093662088de763239c043f30d1770c7ce342b51213f0abd2a6d78e5beb7,Metadata:&PodSandboxMetadata{Name:etcd-ha-565925,Uid:24c16c67f513f809f76a7bbd749e01f3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718015904383823165,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-565925,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.208:2379,kubernetes.io/config.hash: 24c16c67f513f809f76a7bbd749e01f3,kubernetes.io/config.seen: 2024-06-10T10:38:23.929492639Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=b383052b-0cd5-41b7-a016-1785bd2e725a name=/runtime.v1.RuntimeService/ListPodSandbox
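The ListPodSandbox response above is crio's sandbox inventory for this node; the entries visible in this slice of the response (coredns, kindnet, kube-proxy, and the static control-plane pods plus kube-vip) are all SANDBOX_READY. A minimal way to list the same sandboxes directly, assuming the ha-565925 profile is still running and using the ssh pattern this report already uses elsewhere:

    out/minikube-linux-amd64 -p ha-565925 ssh "sudo crictl pods"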
	Jun 10 10:45:54 ha-565925 crio[681]: time="2024-06-10 10:45:54.813406680Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=78a5dc1a-9811-4af1-b8e2-0e81db76b470 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:45:54 ha-565925 crio[681]: time="2024-06-10 10:45:54.813463979Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=78a5dc1a-9811-4af1-b8e2-0e81db76b470 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:45:54 ha-565925 crio[681]: time="2024-06-10 10:45:54.813797925Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e2874c04d7e6035f0b4f93397eceefa3af883aa2a03dc83be4a8aced86a5e132,PodSandboxId:4f03a24f1c978aee692934393624f50f3f6023665dc034769ec878f8b821ad07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718016084446089772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7132613c40918526f05a0d1ea655de838d95cdfc74880ab8c90e7b98b32ee7cc,PodSandboxId:de365696855f1fe15558874733bf40446cd8ab359b3d632ae71d8cd5f32d98b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718015930142271529,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f037e4537f6182747a78c8398e388d1cd43fe536754d6d8a50f52b8689b3163,PodSandboxId:937195f05576713819cba22da4e17238c7f675cd0d37572dfc6718570bb4938f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718015930175570021,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:534a412f3a743952b0fba0175071fb9a47fd04169c4014721c7e5c6931d7e62f,PodSandboxId:b454f12ed3fe06b7ae98d62eb1932133902e43f1db5bb572871f5eb7765942b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718015930144728548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e9
8b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c76fc1da29c41233d9d8517a0d5b17f146c7cde3802483aab50bc3ba11b78b,PodSandboxId:71cfc7bcda08cf3e1c90d0f5cf5f33fc51fb4dd5f028ab6590d0b19f056460dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1718015928620479055,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa492285e9f663d2b76c575594b5ba550e97e7861c891c05022d7e2ac1a78f91,PodSandboxId:9c2610533ce9301fe46003696bb8fb9ed9f112b3cb0f1a144f0e614826879c22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171801592
5064900688,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbbb62793adf92dc3d7d5d72b02fb98e653c558237baa7067bce51a5b0c25553,PodSandboxId:235cdb6eec97308e5c02c06c504736e6bcecc139bc81369249fd408eb0a4a674,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17180159080
18315319,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7458bb04dd39e8e0618ded8278600c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b93b06d8221ab57065f3fdffaa93240b54fcaea6359ee510147933ebc24cdd,PodSandboxId:ae496093662088de763239c043f30d1770c7ce342b51213f0abd2a6d78e5beb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718015904609356393,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:538119110afb1122b2b9d43d0a15441ed76f351b1221ca19caa981d3aab0eb82,PodSandboxId:1c1c2a570436913958921b6806bdea488c57ba8e053d9bc44cde3c1407fe58c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718015904613208681,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:243a70e2c1f2d12414697f36420e1832aa5b0376a87efc3acc5785d8295da364,PodSandboxId:f17389e4e287341cc04675fc44f2af0a57d0270453e694289f6c820fa120ef66,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718015904641357133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf7ff93de6e7c74b032d544065b02f69bea61c82b2d7cd580d6673506fd0496,PodSandboxId:5319b527fdd15e4a549cd2140bbe1e0e473956046be736501f4f1692b6a0a208,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718015904537481240,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=78a5dc1a-9811-4af1-b8e2-0e81db76b470 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:45:54 ha-565925 crio[681]: time="2024-06-10 10:45:54.834626461Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5435a1ea-59eb-4706-81cc-b765c72b6493 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:45:54 ha-565925 crio[681]: time="2024-06-10 10:45:54.834705271Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5435a1ea-59eb-4706-81cc-b765c72b6493 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:45:54 ha-565925 crio[681]: time="2024-06-10 10:45:54.835931406Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8c2ffc18-064f-4b63-a31f-261ea543deb5 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:45:54 ha-565925 crio[681]: time="2024-06-10 10:45:54.836364601Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718016354836344971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c2ffc18-064f-4b63-a31f-261ea543deb5 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:45:54 ha-565925 crio[681]: time="2024-06-10 10:45:54.836978853Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2afcd255-4a0d-4fac-af26-6d9b1f04f961 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:45:54 ha-565925 crio[681]: time="2024-06-10 10:45:54.837050647Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2afcd255-4a0d-4fac-af26-6d9b1f04f961 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:45:54 ha-565925 crio[681]: time="2024-06-10 10:45:54.837268752Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e2874c04d7e6035f0b4f93397eceefa3af883aa2a03dc83be4a8aced86a5e132,PodSandboxId:4f03a24f1c978aee692934393624f50f3f6023665dc034769ec878f8b821ad07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718016084446089772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7132613c40918526f05a0d1ea655de838d95cdfc74880ab8c90e7b98b32ee7cc,PodSandboxId:de365696855f1fe15558874733bf40446cd8ab359b3d632ae71d8cd5f32d98b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718015930142271529,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f037e4537f6182747a78c8398e388d1cd43fe536754d6d8a50f52b8689b3163,PodSandboxId:937195f05576713819cba22da4e17238c7f675cd0d37572dfc6718570bb4938f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718015930175570021,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:534a412f3a743952b0fba0175071fb9a47fd04169c4014721c7e5c6931d7e62f,PodSandboxId:b454f12ed3fe06b7ae98d62eb1932133902e43f1db5bb572871f5eb7765942b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718015930144728548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e9
8b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c76fc1da29c41233d9d8517a0d5b17f146c7cde3802483aab50bc3ba11b78b,PodSandboxId:71cfc7bcda08cf3e1c90d0f5cf5f33fc51fb4dd5f028ab6590d0b19f056460dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1718015928620479055,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa492285e9f663d2b76c575594b5ba550e97e7861c891c05022d7e2ac1a78f91,PodSandboxId:9c2610533ce9301fe46003696bb8fb9ed9f112b3cb0f1a144f0e614826879c22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171801592
5064900688,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbbb62793adf92dc3d7d5d72b02fb98e653c558237baa7067bce51a5b0c25553,PodSandboxId:235cdb6eec97308e5c02c06c504736e6bcecc139bc81369249fd408eb0a4a674,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17180159080
18315319,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7458bb04dd39e8e0618ded8278600c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b93b06d8221ab57065f3fdffaa93240b54fcaea6359ee510147933ebc24cdd,PodSandboxId:ae496093662088de763239c043f30d1770c7ce342b51213f0abd2a6d78e5beb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718015904609356393,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:538119110afb1122b2b9d43d0a15441ed76f351b1221ca19caa981d3aab0eb82,PodSandboxId:1c1c2a570436913958921b6806bdea488c57ba8e053d9bc44cde3c1407fe58c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718015904613208681,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:243a70e2c1f2d12414697f36420e1832aa5b0376a87efc3acc5785d8295da364,PodSandboxId:f17389e4e287341cc04675fc44f2af0a57d0270453e694289f6c820fa120ef66,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718015904641357133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf7ff93de6e7c74b032d544065b02f69bea61c82b2d7cd580d6673506fd0496,PodSandboxId:5319b527fdd15e4a549cd2140bbe1e0e473956046be736501f4f1692b6a0a208,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718015904537481240,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2afcd255-4a0d-4fac-af26-6d9b1f04f961 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:45:54 ha-565925 crio[681]: time="2024-06-10 10:45:54.880507246Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6477adf1-85a6-47c7-9eb5-3946f53ed4bb name=/runtime.v1.RuntimeService/Version
	Jun 10 10:45:54 ha-565925 crio[681]: time="2024-06-10 10:45:54.880622992Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6477adf1-85a6-47c7-9eb5-3946f53ed4bb name=/runtime.v1.RuntimeService/Version
	Jun 10 10:45:54 ha-565925 crio[681]: time="2024-06-10 10:45:54.882188065Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7ff3faf2-9490-4048-b6ec-763d70a47d85 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:45:54 ha-565925 crio[681]: time="2024-06-10 10:45:54.883161291Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718016354883127324,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7ff3faf2-9490-4048-b6ec-763d70a47d85 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:45:54 ha-565925 crio[681]: time="2024-06-10 10:45:54.883830340Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b2bbaeb-0663-4ee9-abd2-fb502d70173a name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:45:54 ha-565925 crio[681]: time="2024-06-10 10:45:54.883910717Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b2bbaeb-0663-4ee9-abd2-fb502d70173a name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:45:54 ha-565925 crio[681]: time="2024-06-10 10:45:54.884261395Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e2874c04d7e6035f0b4f93397eceefa3af883aa2a03dc83be4a8aced86a5e132,PodSandboxId:4f03a24f1c978aee692934393624f50f3f6023665dc034769ec878f8b821ad07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718016084446089772,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7132613c40918526f05a0d1ea655de838d95cdfc74880ab8c90e7b98b32ee7cc,PodSandboxId:de365696855f1fe15558874733bf40446cd8ab359b3d632ae71d8cd5f32d98b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718015930142271529,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f037e4537f6182747a78c8398e388d1cd43fe536754d6d8a50f52b8689b3163,PodSandboxId:937195f05576713819cba22da4e17238c7f675cd0d37572dfc6718570bb4938f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718015930175570021,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:534a412f3a743952b0fba0175071fb9a47fd04169c4014721c7e5c6931d7e62f,PodSandboxId:b454f12ed3fe06b7ae98d62eb1932133902e43f1db5bb572871f5eb7765942b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718015930144728548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e9
8b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7c76fc1da29c41233d9d8517a0d5b17f146c7cde3802483aab50bc3ba11b78b,PodSandboxId:71cfc7bcda08cf3e1c90d0f5cf5f33fc51fb4dd5f028ab6590d0b19f056460dd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CO
NTAINER_RUNNING,CreatedAt:1718015928620479055,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa492285e9f663d2b76c575594b5ba550e97e7861c891c05022d7e2ac1a78f91,PodSandboxId:9c2610533ce9301fe46003696bb8fb9ed9f112b3cb0f1a144f0e614826879c22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:171801592
5064900688,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbbb62793adf92dc3d7d5d72b02fb98e653c558237baa7067bce51a5b0c25553,PodSandboxId:235cdb6eec97308e5c02c06c504736e6bcecc139bc81369249fd408eb0a4a674,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17180159080
18315319,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7458bb04dd39e8e0618ded8278600c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b93b06d8221ab57065f3fdffaa93240b54fcaea6359ee510147933ebc24cdd,PodSandboxId:ae496093662088de763239c043f30d1770c7ce342b51213f0abd2a6d78e5beb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718015904609356393,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:538119110afb1122b2b9d43d0a15441ed76f351b1221ca19caa981d3aab0eb82,PodSandboxId:1c1c2a570436913958921b6806bdea488c57ba8e053d9bc44cde3c1407fe58c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718015904613208681,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:243a70e2c1f2d12414697f36420e1832aa5b0376a87efc3acc5785d8295da364,PodSandboxId:f17389e4e287341cc04675fc44f2af0a57d0270453e694289f6c820fa120ef66,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718015904641357133,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: ku
be-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcf7ff93de6e7c74b032d544065b02f69bea61c82b2d7cd580d6673506fd0496,PodSandboxId:5319b527fdd15e4a549cd2140bbe1e0e473956046be736501f4f1692b6a0a208,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718015904537481240,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b2bbaeb-0663-4ee9-abd2-fb502d70173a name=/runtime.v1.RuntimeService/ListContainers
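The repeated Version, ImageFsInfo and ListContainers request/response pairs above are the kubelet's periodic CRI polls, which crio logs at debug level; each ListContainers response returns the same eleven running containers. A sketch of issuing the same CRI calls by hand (an assumption: SSH access to the running ha-565925 node, where crictl ships in the guest image):

    out/minikube-linux-amd64 -p ha-565925 ssh "sudo crictl version"
    out/minikube-linux-amd64 -p ha-565925 ssh "sudo crictl imagefsinfo"
    out/minikube-linux-amd64 -p ha-565925 ssh "sudo crictl ps -a"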
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e2874c04d7e60       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   4f03a24f1c978       busybox-fc5497c4f-6wmkd
	1f037e4537f61       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   937195f055767       coredns-7db6d8ff4d-545cf
	534a412f3a743       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   b454f12ed3fe0       coredns-7db6d8ff4d-wn6nh
	7132613c40918       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   de365696855f1       storage-provisioner
	c7c76fc1da29c       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266    7 minutes ago       Running             kindnet-cni               0                   71cfc7bcda08c       kindnet-rnn59
	fa492285e9f66       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      7 minutes ago       Running             kube-proxy                0                   9c2610533ce93       kube-proxy-wdjhn
	fbbb62793adf9       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   235cdb6eec973       kube-vip-ha-565925
	243a70e2c1f2d       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      7 minutes ago       Running             kube-apiserver            0                   f17389e4e2873       kube-apiserver-ha-565925
	538119110afb1       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      7 minutes ago       Running             kube-scheduler            0                   1c1c2a5704369       kube-scheduler-ha-565925
	15b93b06d8221       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   ae49609366208       etcd-ha-565925
	bcf7ff93de6e7       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      7 minutes ago       Running             kube-controller-manager   0                   5319b527fdd15       kube-controller-manager-ha-565925
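The table above is the condensed container listing for the primary control-plane node: all eleven containers, including the busybox test workload, are Running with attempt 0 and no restarts. To drill into a single entry, a sketch using the truncated ID copied from the first row (an assumption: the profile is still up; crictl resolves ID prefixes):

    out/minikube-linux-amd64 -p ha-565925 ssh "sudo crictl inspect e2874c04d7e60"
    out/minikube-linux-amd64 -p ha-565925 ssh "sudo crictl logs e2874c04d7e60"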
	
	
	==> coredns [1f037e4537f6182747a78c8398e388d1cd43fe536754d6d8a50f52b8689b3163] <==
	[INFO] 127.0.0.1:34561 - 56492 "HINFO IN 3219957272136125807.6377571563397303703. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009858319s
	[INFO] 10.244.0.4:54950 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.012741474s
	[INFO] 10.244.1.2:48212 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000372595s
	[INFO] 10.244.1.2:38672 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000558623s
	[INFO] 10.244.1.2:39378 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001712401s
	[INFO] 10.244.2.2:60283 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000168931s
	[INFO] 10.244.0.4:44797 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009875834s
	[INFO] 10.244.0.4:48555 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000169499s
	[INFO] 10.244.0.4:59395 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000177597s
	[INFO] 10.244.1.2:59265 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000530757s
	[INFO] 10.244.1.2:47710 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001604733s
	[INFO] 10.244.1.2:52315 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000138586s
	[INFO] 10.244.2.2:55693 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155911s
	[INFO] 10.244.2.2:58799 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094891s
	[INFO] 10.244.2.2:42423 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109708s
	[INFO] 10.244.0.4:50874 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174304s
	[INFO] 10.244.1.2:48744 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098356s
	[INFO] 10.244.1.2:57572 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107588s
	[INFO] 10.244.1.2:43906 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000582793s
	[INFO] 10.244.0.4:36933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083881s
	[INFO] 10.244.0.4:57895 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011453s
	[INFO] 10.244.1.2:33157 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149048s
	[INFO] 10.244.1.2:51327 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000136605s
	[INFO] 10.244.1.2:57659 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000126557s
	[INFO] 10.244.2.2:42606 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000153767s
	
	
	==> coredns [534a412f3a743952b0fba0175071fb9a47fd04169c4014721c7e5c6931d7e62f] <==
	[INFO] 10.244.0.4:42272 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000196791s
	[INFO] 10.244.1.2:51041 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144884s
	[INFO] 10.244.1.2:56818 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001759713s
	[INFO] 10.244.1.2:38288 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.001994069s
	[INFO] 10.244.1.2:34752 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150866s
	[INFO] 10.244.1.2:40260 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000146857s
	[INFO] 10.244.2.2:44655 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154352s
	[INFO] 10.244.2.2:33459 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001816989s
	[INFO] 10.244.2.2:44738 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000324114s
	[INFO] 10.244.2.2:47736 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091876s
	[INFO] 10.244.2.2:44490 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001443467s
	[INFO] 10.244.0.4:55625 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000175656s
	[INFO] 10.244.0.4:39661 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080931s
	[INFO] 10.244.0.4:50296 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000636942s
	[INFO] 10.244.1.2:38824 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118172s
	[INFO] 10.244.2.2:42842 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216365s
	[INFO] 10.244.2.2:59068 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011868s
	[INFO] 10.244.2.2:38486 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000206394s
	[INFO] 10.244.2.2:33649 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110039s
	[INFO] 10.244.0.4:39573 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000202562s
	[INFO] 10.244.0.4:57326 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000128886s
	[INFO] 10.244.1.2:39682 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000217002s
	[INFO] 10.244.2.2:39360 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000367518s
	[INFO] 10.244.2.2:55914 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000522453s
	[INFO] 10.244.2.2:54263 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00020711s
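Each coredns line records the client address and port, the DNS query id, the question (type, class, name), the transport and message size, then the response code, flags, answer size and latency. The NXDOMAIN answers for names such as kubernetes.default and kubernetes.default.default.svc.cluster.local are expected artifacts of the pods' DNS search-path handling, not resolution failures. A sketch that would generate comparable entries from inside the cluster, assuming the busybox test pod is still running and the kubectl context is named after the profile:

    kubectl --context ha-565925 exec busybox-fc5497c4f-6wmkd -- nslookup kubernetes.default
    kubectl --context ha-565925 exec busybox-fc5497c4f-6wmkd -- nslookup host.minikube.internal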
	
	
	==> describe nodes <==
	Name:               ha-565925
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565925
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-565925
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T10_38_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 10:38:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565925
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 10:45:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 10:41:34 +0000   Mon, 10 Jun 2024 10:38:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 10:41:34 +0000   Mon, 10 Jun 2024 10:38:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 10:41:34 +0000   Mon, 10 Jun 2024 10:38:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 10:41:34 +0000   Mon, 10 Jun 2024 10:38:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.208
	  Hostname:    ha-565925
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 81e39b112b50436db5c7fc16ce8eb53e
	  System UUID:                81e39b11-2b50-436d-b5c7-fc16ce8eb53e
	  Boot ID:                    afd4fe8d-84f7-41ff-9890-dc78b1ff1343
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6wmkd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 coredns-7db6d8ff4d-545cf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m12s
	  kube-system                 coredns-7db6d8ff4d-wn6nh             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m12s
	  kube-system                 etcd-ha-565925                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m25s
	  kube-system                 kindnet-rnn59                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m11s
	  kube-system                 kube-apiserver-ha-565925             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m25s
	  kube-system                 kube-controller-manager-ha-565925    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m25s
	  kube-system                 kube-proxy-wdjhn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-scheduler-ha-565925             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m25s
	  kube-system                 kube-vip-ha-565925                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m9s   kube-proxy       
	  Normal  Starting                 7m25s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m25s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m25s  kubelet          Node ha-565925 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m25s  kubelet          Node ha-565925 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m25s  kubelet          Node ha-565925 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m12s  node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Normal  NodeReady                7m6s   kubelet          Node ha-565925 status is now: NodeReady
	  Normal  RegisteredNode           5m51s  node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Normal  RegisteredNode           4m41s  node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	
	
	Name:               ha-565925-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565925-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-565925
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T10_39_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 10:39:47 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565925-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 10:42:29 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 10 Jun 2024 10:41:50 +0000   Mon, 10 Jun 2024 10:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 10 Jun 2024 10:41:50 +0000   Mon, 10 Jun 2024 10:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 10 Jun 2024 10:41:50 +0000   Mon, 10 Jun 2024 10:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 10 Jun 2024 10:41:50 +0000   Mon, 10 Jun 2024 10:43:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    ha-565925-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 55a76fcaaea54bebb8694a2ff5e7d2ea
	  System UUID:                55a76fca-aea5-4beb-b869-4a2ff5e7d2ea
	  Boot ID:                    d5b6f0ad-b291-4951-bab9-e2cd70014f7f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8g67g                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 etcd-ha-565925-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m6s
	  kube-system                 kindnet-9jv7q                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m8s
	  kube-system                 kube-apiserver-ha-565925-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-controller-manager-ha-565925-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-proxy-vbgnx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-scheduler-ha-565925-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-vip-ha-565925-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 6m3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  6m8s (x8 over 6m8s)  kubelet          Node ha-565925-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m8s (x8 over 6m8s)  kubelet          Node ha-565925-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m8s (x7 over 6m8s)  kubelet          Node ha-565925-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m7s                 node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal  RegisteredNode           5m51s                node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal  RegisteredNode           4m41s                node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal  NodeNotReady             2m42s                node-controller  Node ha-565925-m02 status is now: NodeNotReady
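	The Unknown conditions and unreachable taints above are the crux of this post-mortem: the kubelet on ha-565925-m02 stopped posting status, so the node controller marked the node NotReady. The same state could be confirmed against this cluster with standard kubectl calls (illustrative commands, not part of the captured log):
	  kubectl --context ha-565925 get nodes -o wide
	  kubectl --context ha-565925 describe node ha-565925-m02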
	
	
	Name:               ha-565925-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565925-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-565925
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T10_40_59_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 10:40:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565925-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 10:45:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 10:41:27 +0000   Mon, 10 Jun 2024 10:40:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 10:41:27 +0000   Mon, 10 Jun 2024 10:40:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 10:41:27 +0000   Mon, 10 Jun 2024 10:40:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 10:41:27 +0000   Mon, 10 Jun 2024 10:41:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    ha-565925-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c8de12ccd43b4441ac42fe5a4b57ed64
	  System UUID:                c8de12cc-d43b-4441-ac42-fe5a4b57ed64
	  Boot ID:                    d2c38454-f5bf-4fee-84c8-941e8e5709a4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jmbg2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 etcd-ha-565925-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m57s
	  kube-system                 kindnet-9tcng                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m59s
	  kube-system                 kube-apiserver-ha-565925-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-controller-manager-ha-565925-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-proxy-d44ft                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-scheduler-ha-565925-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-vip-ha-565925-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m54s                  kube-proxy       
	  Normal  Starting                 4m59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m59s (x8 over 4m59s)  kubelet          Node ha-565925-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m59s (x8 over 4m59s)  kubelet          Node ha-565925-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m59s (x7 over 4m59s)  kubelet          Node ha-565925-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m57s                  node-controller  Node ha-565925-m03 event: Registered Node ha-565925-m03 in Controller
	  Normal  RegisteredNode           4m56s                  node-controller  Node ha-565925-m03 event: Registered Node ha-565925-m03 in Controller
	  Normal  RegisteredNode           4m41s                  node-controller  Node ha-565925-m03 event: Registered Node ha-565925-m03 in Controller
	
	
	Name:               ha-565925-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565925-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-565925
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T10_41_59_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 10:41:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565925-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 10:45:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 10:42:29 +0000   Mon, 10 Jun 2024 10:41:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 10:42:29 +0000   Mon, 10 Jun 2024 10:41:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 10:42:29 +0000   Mon, 10 Jun 2024 10:41:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 10:42:29 +0000   Mon, 10 Jun 2024 10:42:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.229
	  Hostname:    ha-565925-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5196e1f9b5684ae78368fe8d66c3d24c
	  System UUID:                5196e1f9-b568-4ae7-8368-fe8d66c3d24c
	  Boot ID:                    ffecf9d5-cc7c-4751-819f-473afd63d8a7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-lkf5b       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m57s
	  kube-system                 kube-proxy-dpsbw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m57s (x2 over 3m57s)  kubelet          Node ha-565925-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m57s (x2 over 3m57s)  kubelet          Node ha-565925-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m57s (x2 over 3m57s)  kubelet          Node ha-565925-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m56s                  node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal  RegisteredNode           3m56s                  node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal  NodeReady                3m46s                  kubelet          Node ha-565925-m04 status is now: NodeReady
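	Taken together, the four node blocks above give the expected HA layout at capture time: ha-565925 as the primary control plane, ha-565925-m02 and ha-565925-m03 as additional control planes, and ha-565925-m04 as a worker, each with its own PodCIDR. The same inventory can be reproduced with ordinary kubectl calls (a sketch, not part of the captured output):
	  kubectl --context ha-565925 describe nodes
	  kubectl --context ha-565925 get pods -A -o wide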
	
	
	==> dmesg <==
	[Jun10 10:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051910] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038738] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.451665] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Jun10 10:38] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.529458] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.150837] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.061096] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061390] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.176128] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.114890] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.264219] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +3.909095] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +3.637727] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +0.061637] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.135890] systemd-fstab-generator[1360]: Ignoring "noauto" option for root device
	[  +0.082129] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.392312] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.014769] kauditd_printk_skb: 43 callbacks suppressed
	[  +9.917879] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [15b93b06d8221ab57065f3fdffaa93240b54fcaea6359ee510147933ebc24cdd] <==
	{"level":"warn","ts":"2024-06-10T10:45:55.170329Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:45:55.173997Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:45:55.187631Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:45:55.194923Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:45:55.204512Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:45:55.213462Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:45:55.217541Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:45:55.221562Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:45:55.231033Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:45:55.238267Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:45:55.24486Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:45:55.24896Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:45:55.251514Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:45:55.253885Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:45:55.254084Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:45:55.27507Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:45:55.287278Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:45:55.290982Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:45:55.305059Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:45:55.313116Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:45:55.318952Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:45:55.334034Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:45:55.346309Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:45:55.353303Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-06-10T10:45:55.387684Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7fe6bf77aaafe0f6","from":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 10:45:55 up 8 min,  0 users,  load average: 0.36, 0.28, 0.18
	Linux ha-565925 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c7c76fc1da29c41233d9d8517a0d5b17f146c7cde3802483aab50bc3ba11b78b] <==
	I0610 10:45:19.632311       1 main.go:250] Node ha-565925-m04 has CIDR [10.244.3.0/24] 
	I0610 10:45:29.641016       1 main.go:223] Handling node with IPs: map[192.168.39.208:{}]
	I0610 10:45:29.641068       1 main.go:227] handling current node
	I0610 10:45:29.641083       1 main.go:223] Handling node with IPs: map[192.168.39.230:{}]
	I0610 10:45:29.641090       1 main.go:250] Node ha-565925-m02 has CIDR [10.244.1.0/24] 
	I0610 10:45:29.641284       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0610 10:45:29.641312       1 main.go:250] Node ha-565925-m03 has CIDR [10.244.2.0/24] 
	I0610 10:45:29.641402       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0610 10:45:29.641427       1 main.go:250] Node ha-565925-m04 has CIDR [10.244.3.0/24] 
	I0610 10:45:39.658064       1 main.go:223] Handling node with IPs: map[192.168.39.208:{}]
	I0610 10:45:39.658174       1 main.go:227] handling current node
	I0610 10:45:39.658204       1 main.go:223] Handling node with IPs: map[192.168.39.230:{}]
	I0610 10:45:39.658223       1 main.go:250] Node ha-565925-m02 has CIDR [10.244.1.0/24] 
	I0610 10:45:39.658358       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0610 10:45:39.658381       1 main.go:250] Node ha-565925-m03 has CIDR [10.244.2.0/24] 
	I0610 10:45:39.658439       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0610 10:45:39.658513       1 main.go:250] Node ha-565925-m04 has CIDR [10.244.3.0/24] 
	I0610 10:45:49.690373       1 main.go:223] Handling node with IPs: map[192.168.39.208:{}]
	I0610 10:45:49.690467       1 main.go:227] handling current node
	I0610 10:45:49.690495       1 main.go:223] Handling node with IPs: map[192.168.39.230:{}]
	I0610 10:45:49.690513       1 main.go:250] Node ha-565925-m02 has CIDR [10.244.1.0/24] 
	I0610 10:45:49.690676       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0610 10:45:49.690698       1 main.go:250] Node ha-565925-m03 has CIDR [10.244.2.0/24] 
	I0610 10:45:49.690830       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0610 10:45:49.690858       1 main.go:250] Node ha-565925-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [243a70e2c1f2d12414697f36420e1832aa5b0376a87efc3acc5785d8295da364] <==
	I0610 10:38:29.237625       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0610 10:38:29.243937       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.208]
	I0610 10:38:29.244948       1 controller.go:615] quota admission added evaluator for: endpoints
	I0610 10:38:29.249817       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 10:38:29.636376       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0610 10:38:30.878540       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0610 10:38:30.900420       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0610 10:38:30.918282       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0610 10:38:43.392950       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0610 10:38:43.998070       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0610 10:41:26.033150       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58026: use of closed network connection
	E0610 10:41:26.218468       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58040: use of closed network connection
	E0610 10:41:26.623437       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58094: use of closed network connection
	E0610 10:41:26.821697       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58112: use of closed network connection
	E0610 10:41:27.006953       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58122: use of closed network connection
	E0610 10:41:27.183339       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58136: use of closed network connection
	E0610 10:41:27.374566       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58168: use of closed network connection
	E0610 10:41:27.583065       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58186: use of closed network connection
	E0610 10:41:27.867867       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58208: use of closed network connection
	E0610 10:41:28.042423       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58226: use of closed network connection
	E0610 10:41:28.222259       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58254: use of closed network connection
	E0610 10:41:28.407977       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58264: use of closed network connection
	E0610 10:41:28.591934       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58278: use of closed network connection
	E0610 10:41:28.765354       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:58298: use of closed network connection
	W0610 10:42:39.253920       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.208 192.168.39.76]
	
	
	==> kube-controller-manager [bcf7ff93de6e7c74b032d544065b02f69bea61c82b2d7cd580d6673506fd0496] <==
	I0610 10:39:48.292977       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-565925-m02"
	I0610 10:40:56.714956       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-565925-m03\" does not exist"
	I0610 10:40:56.729394       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-565925-m03" podCIDRs=["10.244.2.0/24"]
	I0610 10:40:58.317071       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-565925-m03"
	I0610 10:41:21.260124       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="102.475697ms"
	I0610 10:41:21.334558       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="74.092337ms"
	I0610 10:41:21.578712       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="244.097947ms"
	I0610 10:41:21.621353       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.528024ms"
	I0610 10:41:21.621483       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.431µs"
	I0610 10:41:21.752445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.356922ms"
	I0610 10:41:21.752544       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.295µs"
	I0610 10:41:24.856380       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.71329ms"
	I0610 10:41:24.856660       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.24µs"
	I0610 10:41:24.990853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.855541ms"
	I0610 10:41:24.991230       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="104.649µs"
	I0610 10:41:25.603425       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.496252ms"
	I0610 10:41:25.603541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.117µs"
	E0610 10:41:58.368353       1 certificate_controller.go:146] Sync csr-hcqgx failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-hcqgx": the object has been modified; please apply your changes to the latest version and try again
	I0610 10:41:58.654308       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-565925-m04\" does not exist"
	I0610 10:41:58.675229       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-565925-m04" podCIDRs=["10.244.3.0/24"]
	I0610 10:42:03.579018       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-565925-m04"
	I0610 10:42:09.621414       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-565925-m04"
	I0610 10:43:13.605629       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-565925-m04"
	I0610 10:43:13.686126       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.735253ms"
	I0610 10:43:13.686420       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="126.528µs"
	
	
	==> kube-proxy [fa492285e9f663d2b76c575594b5ba550e97e7861c891c05022d7e2ac1a78f91] <==
	I0610 10:38:45.218661       1 server_linux.go:69] "Using iptables proxy"
	I0610 10:38:45.235348       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.208"]
	I0610 10:38:45.279266       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 10:38:45.279353       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 10:38:45.279377       1 server_linux.go:165] "Using iptables Proxier"
	I0610 10:38:45.282213       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 10:38:45.282534       1 server.go:872] "Version info" version="v1.30.1"
	I0610 10:38:45.282607       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 10:38:45.284663       1 config.go:192] "Starting service config controller"
	I0610 10:38:45.284789       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 10:38:45.284861       1 config.go:101] "Starting endpoint slice config controller"
	I0610 10:38:45.284923       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 10:38:45.286425       1 config.go:319] "Starting node config controller"
	I0610 10:38:45.286476       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 10:38:45.385453       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 10:38:45.385461       1 shared_informer.go:320] Caches are synced for service config
	I0610 10:38:45.386991       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [538119110afb1122b2b9d43d0a15441ed76f351b1221ca19caa981d3aab0eb82] <==
	E0610 10:38:28.841032       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0610 10:38:29.100887       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 10:38:29.101379       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 10:38:32.184602       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0610 10:40:56.778133       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-9tcng\": pod kindnet-9tcng is already assigned to node \"ha-565925-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-9tcng" node="ha-565925-m03"
	E0610 10:40:56.778301       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod c47fe372-aee9-4fb2-9c62-b84341af1c81(kube-system/kindnet-9tcng) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-9tcng"
	E0610 10:40:56.778331       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-9tcng\": pod kindnet-9tcng is already assigned to node \"ha-565925-m03\"" pod="kube-system/kindnet-9tcng"
	I0610 10:40:56.778371       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-9tcng" node="ha-565925-m03"
	E0610 10:40:56.907191       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-l6zzp\": pod kindnet-l6zzp is already assigned to node \"ha-565925-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-l6zzp" node="ha-565925-m03"
	E0610 10:40:56.907263       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-l6zzp\": pod kindnet-l6zzp is already assigned to node \"ha-565925-m03\"" pod="kube-system/kindnet-l6zzp"
	I0610 10:41:21.202401       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="53b82f36-f185-4980-9722-bfd952e91286" pod="default/busybox-fc5497c4f-8g67g" assumedNode="ha-565925-m02" currentNode="ha-565925-m03"
	E0610 10:41:21.213967       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-8g67g\": pod busybox-fc5497c4f-8g67g is already assigned to node \"ha-565925-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-8g67g" node="ha-565925-m03"
	E0610 10:41:21.214050       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 53b82f36-f185-4980-9722-bfd952e91286(default/busybox-fc5497c4f-8g67g) was assumed on ha-565925-m03 but assigned to ha-565925-m02" pod="default/busybox-fc5497c4f-8g67g"
	E0610 10:41:21.214076       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-8g67g\": pod busybox-fc5497c4f-8g67g is already assigned to node \"ha-565925-m02\"" pod="default/busybox-fc5497c4f-8g67g"
	I0610 10:41:21.214097       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-8g67g" node="ha-565925-m02"
	E0610 10:41:21.261604       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-6wmkd\": pod busybox-fc5497c4f-6wmkd is already assigned to node \"ha-565925\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-6wmkd" node="ha-565925"
	E0610 10:41:21.261683       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-6wmkd\": pod busybox-fc5497c4f-6wmkd is already assigned to node \"ha-565925\"" pod="default/busybox-fc5497c4f-6wmkd"
	E0610 10:41:58.751185       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-hr7qn\": pod kube-proxy-hr7qn is already assigned to node \"ha-565925-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-hr7qn" node="ha-565925-m04"
	E0610 10:41:58.751457       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 3bb3dab4-2341-44cc-b41f-4333e4bb1138(kube-system/kube-proxy-hr7qn) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-hr7qn"
	E0610 10:41:58.751511       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-hr7qn\": pod kube-proxy-hr7qn is already assigned to node \"ha-565925-m04\"" pod="kube-system/kube-proxy-hr7qn"
	I0610 10:41:58.751611       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-hr7qn" node="ha-565925-m04"
	E0610 10:41:58.753913       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-lkf5b\": pod kindnet-lkf5b is already assigned to node \"ha-565925-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-lkf5b" node="ha-565925-m04"
	E0610 10:41:58.754717       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 087be749-ed61-402c-86cf-ccf5bc66b9f9(kube-system/kindnet-lkf5b) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-lkf5b"
	E0610 10:41:58.756692       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-lkf5b\": pod kindnet-lkf5b is already assigned to node \"ha-565925-m04\"" pod="kube-system/kindnet-lkf5b"
	I0610 10:41:58.756887       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-lkf5b" node="ha-565925-m04"
	
	
	==> kubelet <==
	Jun 10 10:41:30 ha-565925 kubelet[1367]: E0610 10:41:30.828972    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 10:41:30 ha-565925 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 10:41:30 ha-565925 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 10:41:30 ha-565925 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 10:41:30 ha-565925 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 10:42:30 ha-565925 kubelet[1367]: E0610 10:42:30.827923    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 10:42:30 ha-565925 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 10:42:30 ha-565925 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 10:42:30 ha-565925 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 10:42:30 ha-565925 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 10:43:30 ha-565925 kubelet[1367]: E0610 10:43:30.834914    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 10:43:30 ha-565925 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 10:43:30 ha-565925 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 10:43:30 ha-565925 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 10:43:30 ha-565925 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 10:44:30 ha-565925 kubelet[1367]: E0610 10:44:30.828865    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 10:44:30 ha-565925 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 10:44:30 ha-565925 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 10:44:30 ha-565925 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 10:44:30 ha-565925 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 10:45:30 ha-565925 kubelet[1367]: E0610 10:45:30.828151    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 10:45:30 ha-565925 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 10:45:30 ha-565925 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 10:45:30 ha-565925 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 10:45:30 ha-565925 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
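	The recurring "Could not set up iptables canary" errors above come from the kubelet's ip6tables health probe: the nat table is not available in the guest, so creating the KUBE-KUBELET-CANARY chain fails. For this single-stack IPv4 cluster (kube-proxy logged "No iptables support for family IPv6" earlier) the error is most likely noise rather than a failure cause. Whether the module is merely unloaded or absent from the Buildroot kernel could be checked with something like (an illustrative command, not from the log):
	  out/minikube-linux-amd64 -p ha-565925 ssh "lsmod | grep ip6table_nat || sudo modprobe ip6table_nat"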
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-565925 -n ha-565925
helpers_test.go:261: (dbg) Run:  kubectl --context ha-565925 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (61.28s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (365.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-565925 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-565925 -v=7 --alsologtostderr
E0610 10:46:57.913977   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
E0610 10:47:25.598160   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-565925 -v=7 --alsologtostderr: exit status 82 (2m1.896055382s)
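The stop command above exits non-zero (status 82) after roughly two minutes: the stdout below shows "Stopping node" messages for m04 and then m03, and the stderr trace shows m04 stopping within seconds before the command moves on to m03. To see what state the kvm2-backed VMs were actually left in, one could cross-check outside the test harness (illustrative commands, not part of the run):
  out/minikube-linux-amd64 -p ha-565925 status --alsologtostderr
  sudo virsh list --all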

                                                
                                                
-- stdout --
	* Stopping node "ha-565925-m04"  ...
	* Stopping node "ha-565925-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:45:56.808977   27679 out.go:291] Setting OutFile to fd 1 ...
	I0610 10:45:56.809255   27679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:45:56.809265   27679 out.go:304] Setting ErrFile to fd 2...
	I0610 10:45:56.809270   27679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:45:56.809445   27679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 10:45:56.809663   27679 out.go:298] Setting JSON to false
	I0610 10:45:56.809748   27679 mustload.go:65] Loading cluster: ha-565925
	I0610 10:45:56.810103   27679 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:45:56.810187   27679 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json ...
	I0610 10:45:56.810355   27679 mustload.go:65] Loading cluster: ha-565925
	I0610 10:45:56.810482   27679 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:45:56.810510   27679 stop.go:39] StopHost: ha-565925-m04
	I0610 10:45:56.810873   27679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:56.810919   27679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:56.825576   27679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42647
	I0610 10:45:56.826093   27679 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:56.826631   27679 main.go:141] libmachine: Using API Version  1
	I0610 10:45:56.826661   27679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:56.827006   27679 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:56.829474   27679 out.go:177] * Stopping node "ha-565925-m04"  ...
	I0610 10:45:56.830806   27679 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0610 10:45:56.830833   27679 main.go:141] libmachine: (ha-565925-m04) Calling .DriverName
	I0610 10:45:56.831102   27679 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0610 10:45:56.831138   27679 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHHostname
	I0610 10:45:56.834452   27679 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:56.834903   27679 main.go:141] libmachine: (ha-565925-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:20:94", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:41:43 +0000 UTC Type:0 Mac:52:54:00:c5:20:94 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-565925-m04 Clientid:01:52:54:00:c5:20:94}
	I0610 10:45:56.834956   27679 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined IP address 192.168.39.229 and MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:45:56.835153   27679 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHPort
	I0610 10:45:56.835341   27679 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHKeyPath
	I0610 10:45:56.835505   27679 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHUsername
	I0610 10:45:56.835661   27679 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m04/id_rsa Username:docker}
	I0610 10:45:56.919003   27679 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0610 10:45:56.971795   27679 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0610 10:45:57.025966   27679 main.go:141] libmachine: Stopping "ha-565925-m04"...
	I0610 10:45:57.026006   27679 main.go:141] libmachine: (ha-565925-m04) Calling .GetState
	I0610 10:45:57.027683   27679 main.go:141] libmachine: (ha-565925-m04) Calling .Stop
	I0610 10:45:57.031018   27679 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 0/120
	I0610 10:45:58.231357   27679 main.go:141] libmachine: (ha-565925-m04) Calling .GetState
	I0610 10:45:58.232720   27679 main.go:141] libmachine: Machine "ha-565925-m04" was stopped.
	I0610 10:45:58.232736   27679 stop.go:75] duration metric: took 1.401932339s to stop
	I0610 10:45:58.232766   27679 stop.go:39] StopHost: ha-565925-m03
	I0610 10:45:58.233096   27679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:45:58.233135   27679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:45:58.248332   27679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39975
	I0610 10:45:58.248851   27679 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:45:58.249466   27679 main.go:141] libmachine: Using API Version  1
	I0610 10:45:58.249494   27679 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:45:58.249838   27679 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:45:58.251677   27679 out.go:177] * Stopping node "ha-565925-m03"  ...
	I0610 10:45:58.252928   27679 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0610 10:45:58.252967   27679 main.go:141] libmachine: (ha-565925-m03) Calling .DriverName
	I0610 10:45:58.253201   27679 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0610 10:45:58.253229   27679 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHHostname
	I0610 10:45:58.256827   27679 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:58.257344   27679 main.go:141] libmachine: (ha-565925-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:67:38", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:40:19 +0000 UTC Type:0 Mac:52:54:00:cf:67:38 Iaid: IPaddr:192.168.39.76 Prefix:24 Hostname:ha-565925-m03 Clientid:01:52:54:00:cf:67:38}
	I0610 10:45:58.257373   27679 main.go:141] libmachine: (ha-565925-m03) DBG | domain ha-565925-m03 has defined IP address 192.168.39.76 and MAC address 52:54:00:cf:67:38 in network mk-ha-565925
	I0610 10:45:58.257514   27679 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHPort
	I0610 10:45:58.257682   27679 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHKeyPath
	I0610 10:45:58.257833   27679 main.go:141] libmachine: (ha-565925-m03) Calling .GetSSHUsername
	I0610 10:45:58.257967   27679 sshutil.go:53] new ssh client: &{IP:192.168.39.76 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m03/id_rsa Username:docker}
	I0610 10:45:58.343220   27679 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0610 10:45:58.396251   27679 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0610 10:45:58.454587   27679 main.go:141] libmachine: Stopping "ha-565925-m03"...
	I0610 10:45:58.454611   27679 main.go:141] libmachine: (ha-565925-m03) Calling .GetState
	I0610 10:45:58.456074   27679 main.go:141] libmachine: (ha-565925-m03) Calling .Stop
	I0610 10:45:58.459775   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 0/120
	I0610 10:45:59.461191   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 1/120
	I0610 10:46:00.462575   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 2/120
	I0610 10:46:01.464234   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 3/120
	I0610 10:46:02.465863   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 4/120
	I0610 10:46:03.467904   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 5/120
	I0610 10:46:04.469300   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 6/120
	I0610 10:46:05.470836   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 7/120
	I0610 10:46:06.472218   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 8/120
	I0610 10:46:07.473963   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 9/120
	I0610 10:46:08.476272   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 10/120
	I0610 10:46:09.477921   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 11/120
	I0610 10:46:10.479769   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 12/120
	I0610 10:46:11.481511   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 13/120
	I0610 10:46:12.483277   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 14/120
	I0610 10:46:13.485586   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 15/120
	I0610 10:46:14.486975   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 16/120
	I0610 10:46:15.488805   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 17/120
	I0610 10:46:16.490702   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 18/120
	I0610 10:46:17.492058   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 19/120
	I0610 10:46:18.494014   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 20/120
	I0610 10:46:19.495588   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 21/120
	I0610 10:46:20.496939   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 22/120
	I0610 10:46:21.498470   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 23/120
	I0610 10:46:22.499933   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 24/120
	I0610 10:46:23.502271   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 25/120
	I0610 10:46:24.503870   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 26/120
	I0610 10:46:25.505447   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 27/120
	I0610 10:46:26.506732   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 28/120
	I0610 10:46:27.508210   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 29/120
	I0610 10:46:28.509863   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 30/120
	I0610 10:46:29.511667   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 31/120
	I0610 10:46:30.512969   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 32/120
	I0610 10:46:31.514301   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 33/120
	I0610 10:46:32.515479   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 34/120
	I0610 10:46:33.517252   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 35/120
	I0610 10:46:34.519553   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 36/120
	I0610 10:46:35.520989   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 37/120
	I0610 10:46:36.522415   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 38/120
	I0610 10:46:37.523897   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 39/120
	I0610 10:46:38.525924   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 40/120
	I0610 10:46:39.527366   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 41/120
	I0610 10:46:40.529229   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 42/120
	I0610 10:46:41.531717   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 43/120
	I0610 10:46:42.533264   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 44/120
	I0610 10:46:43.535283   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 45/120
	I0610 10:46:44.536793   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 46/120
	I0610 10:46:45.538118   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 47/120
	I0610 10:46:46.540621   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 48/120
	I0610 10:46:47.541946   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 49/120
	I0610 10:46:48.544201   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 50/120
	I0610 10:46:49.545641   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 51/120
	I0610 10:46:50.547361   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 52/120
	I0610 10:46:51.549001   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 53/120
	I0610 10:46:52.550359   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 54/120
	I0610 10:46:53.552261   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 55/120
	I0610 10:46:54.553655   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 56/120
	I0610 10:46:55.555224   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 57/120
	I0610 10:46:56.556683   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 58/120
	I0610 10:46:57.558277   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 59/120
	I0610 10:46:58.560780   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 60/120
	I0610 10:46:59.562344   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 61/120
	I0610 10:47:00.563985   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 62/120
	I0610 10:47:01.565482   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 63/120
	I0610 10:47:02.566890   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 64/120
	I0610 10:47:03.568256   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 65/120
	I0610 10:47:04.569804   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 66/120
	I0610 10:47:05.570967   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 67/120
	I0610 10:47:06.572536   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 68/120
	I0610 10:47:07.573855   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 69/120
	I0610 10:47:08.575729   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 70/120
	I0610 10:47:09.577119   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 71/120
	I0610 10:47:10.578607   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 72/120
	I0610 10:47:11.580019   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 73/120
	I0610 10:47:12.581370   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 74/120
	I0610 10:47:13.583196   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 75/120
	I0610 10:47:14.584649   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 76/120
	I0610 10:47:15.586096   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 77/120
	I0610 10:47:16.587425   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 78/120
	I0610 10:47:17.589814   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 79/120
	I0610 10:47:18.591387   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 80/120
	I0610 10:47:19.592763   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 81/120
	I0610 10:47:20.595223   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 82/120
	I0610 10:47:21.596752   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 83/120
	I0610 10:47:22.598118   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 84/120
	I0610 10:47:23.599971   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 85/120
	I0610 10:47:24.601210   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 86/120
	I0610 10:47:25.603374   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 87/120
	I0610 10:47:26.604666   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 88/120
	I0610 10:47:27.606040   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 89/120
	I0610 10:47:28.608773   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 90/120
	I0610 10:47:29.610065   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 91/120
	I0610 10:47:30.612475   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 92/120
	I0610 10:47:31.613609   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 93/120
	I0610 10:47:32.615137   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 94/120
	I0610 10:47:33.616619   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 95/120
	I0610 10:47:34.618257   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 96/120
	I0610 10:47:35.620377   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 97/120
	I0610 10:47:36.621706   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 98/120
	I0610 10:47:37.622975   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 99/120
	I0610 10:47:38.624667   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 100/120
	I0610 10:47:39.626056   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 101/120
	I0610 10:47:40.627551   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 102/120
	I0610 10:47:41.629346   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 103/120
	I0610 10:47:42.630801   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 104/120
	I0610 10:47:43.632876   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 105/120
	I0610 10:47:44.635077   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 106/120
	I0610 10:47:45.636384   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 107/120
	I0610 10:47:46.638232   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 108/120
	I0610 10:47:47.639817   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 109/120
	I0610 10:47:48.641221   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 110/120
	I0610 10:47:49.643231   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 111/120
	I0610 10:47:50.644612   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 112/120
	I0610 10:47:51.646197   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 113/120
	I0610 10:47:52.647802   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 114/120
	I0610 10:47:53.649593   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 115/120
	I0610 10:47:54.650961   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 116/120
	I0610 10:47:55.652770   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 117/120
	I0610 10:47:56.654215   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 118/120
	I0610 10:47:57.655626   27679 main.go:141] libmachine: (ha-565925-m03) Waiting for machine to stop 119/120
	I0610 10:47:58.656607   27679 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0610 10:47:58.656672   27679 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0610 10:47:58.658799   27679 out.go:177] 
	W0610 10:47:58.660152   27679 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0610 10:47:58.660166   27679 out.go:239] * 
	* 
	W0610 10:47:58.662419   27679 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:47:58.663874   27679 out.go:177] 

** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-565925 -v=7 --alsologtostderr" : exit status 82
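Editor's note on the failure mode above: the "Waiting for machine to stop N/120" lines show the kvm2 driver polling the guest once per second for 120 attempts before giving up, which is why the stop command runs for roughly two minutes and then exits with status 82 (GUEST_STOP_TIMEOUT) while the VM is still "Running". The sketch below is a minimal, hypothetical illustration of that bounded wait pattern; the names (stopVM, vmIsRunning, maxAttempts) are invented for illustration and are not minikube's actual API.

    // Illustrative sketch of a bounded stop-wait loop, assuming a one-second
    // poll interval and 120 attempts as suggested by the "0/120 .. 119/120"
    // counter in the log. Not minikube source code.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    const maxAttempts = 120 // matches the N/120 counter in the log

    // vmIsRunning stands in for a driver state query; here we assume the
    // guest never shuts down, reproducing the failure seen in this test.
    func vmIsRunning() bool { return true }

    func stopVM() error {
    	for i := 0; i < maxAttempts; i++ {
    		if !vmIsRunning() {
    			return nil // guest reached a stopped state in time
    		}
    		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
    		time.Sleep(1 * time.Second)
    	}
    	return errors.New(`unable to stop vm, current state "Running"`)
    }

    func main() {
    	if err := stopVM(); err != nil {
    		// a CLI wrapping this would surface the error as a stop timeout,
    		// analogous to exit status 82 / GUEST_STOP_TIMEOUT above
    		fmt.Println("stop err:", err)
    	}
    }

Under this reading, the subsequent "start -p ha-565925 --wait=true" run succeeds because the guest was never actually powered off; the restart path only has to reconcile the still-running nodes.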
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-565925 --wait=true -v=7 --alsologtostderr
E0610 10:49:12.453715   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
E0610 10:50:35.498215   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
E0610 10:51:57.913862   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-565925 --wait=true -v=7 --alsologtostderr: (4m0.64751189s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-565925
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-565925 -n ha-565925
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-565925 logs -n 25: (1.80417052s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-565925 cp ha-565925-m03:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m02:/home/docker/cp-test_ha-565925-m03_ha-565925-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925-m02 sudo cat                                          | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m03_ha-565925-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m03:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04:/home/docker/cp-test_ha-565925-m03_ha-565925-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925-m04 sudo cat                                          | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m03_ha-565925-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-565925 cp testdata/cp-test.txt                                                | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1107448961/001/cp-test_ha-565925-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925:/home/docker/cp-test_ha-565925-m04_ha-565925.txt                       |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925 sudo cat                                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m04_ha-565925.txt                                 |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m02:/home/docker/cp-test_ha-565925-m04_ha-565925-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925-m02 sudo cat                                          | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m04_ha-565925-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m03:/home/docker/cp-test_ha-565925-m04_ha-565925-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925-m03 sudo cat                                          | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m04_ha-565925-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-565925 node stop m02 -v=7                                                     | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-565925 node start m02 -v=7                                                    | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:44 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-565925 -v=7                                                           | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-565925 -v=7                                                                | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-565925 --wait=true -v=7                                                    | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:47 UTC | 10 Jun 24 10:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-565925                                                                | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:51 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 10:47:58
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 10:47:58.708897   28147 out.go:291] Setting OutFile to fd 1 ...
	I0610 10:47:58.709187   28147 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:47:58.709198   28147 out.go:304] Setting ErrFile to fd 2...
	I0610 10:47:58.709205   28147 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:47:58.709390   28147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 10:47:58.709943   28147 out.go:298] Setting JSON to false
	I0610 10:47:58.710862   28147 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1820,"bootTime":1718014659,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 10:47:58.710921   28147 start.go:139] virtualization: kvm guest
	I0610 10:47:58.713146   28147 out.go:177] * [ha-565925] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 10:47:58.714611   28147 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 10:47:58.714644   28147 notify.go:220] Checking for updates...
	I0610 10:47:58.715823   28147 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:47:58.717146   28147 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 10:47:58.718541   28147 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:47:58.719976   28147 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 10:47:58.721456   28147 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:47:58.723255   28147 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:47:58.723402   28147 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 10:47:58.723851   28147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:47:58.723892   28147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:47:58.738873   28147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34949
	I0610 10:47:58.739425   28147 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:47:58.740014   28147 main.go:141] libmachine: Using API Version  1
	I0610 10:47:58.740033   28147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:47:58.740446   28147 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:47:58.740622   28147 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:47:58.779189   28147 out.go:177] * Using the kvm2 driver based on existing profile
	I0610 10:47:58.780705   28147 start.go:297] selected driver: kvm2
	I0610 10:47:58.780720   28147 start.go:901] validating driver "kvm2" against &{Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.229 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:47:58.780863   28147 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:47:58.781229   28147 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:47:58.781314   28147 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19046-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0610 10:47:58.797812   28147 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0610 10:47:58.798474   28147 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:47:58.798529   28147 cni.go:84] Creating CNI manager for ""
	I0610 10:47:58.798544   28147 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0610 10:47:58.798592   28147 start.go:340] cluster config:
	{Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.229 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:47:58.798739   28147 iso.go:125] acquiring lock: {Name:mk85871dbcb370cef71a4aa2ff7ef43503908c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:47:58.800716   28147 out.go:177] * Starting "ha-565925" primary control-plane node in "ha-565925" cluster
	I0610 10:47:58.801916   28147 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 10:47:58.801957   28147 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0610 10:47:58.801970   28147 cache.go:56] Caching tarball of preloaded images
	I0610 10:47:58.802042   28147 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 10:47:58.802058   28147 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0610 10:47:58.802217   28147 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json ...
	I0610 10:47:58.802503   28147 start.go:360] acquireMachinesLock for ha-565925: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:47:58.802557   28147 start.go:364] duration metric: took 29.094µs to acquireMachinesLock for "ha-565925"
	I0610 10:47:58.802575   28147 start.go:96] Skipping create...Using existing machine configuration
	I0610 10:47:58.802582   28147 fix.go:54] fixHost starting: 
	I0610 10:47:58.802985   28147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:47:58.803018   28147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:47:58.817675   28147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44641
	I0610 10:47:58.818048   28147 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:47:58.818536   28147 main.go:141] libmachine: Using API Version  1
	I0610 10:47:58.818558   28147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:47:58.818872   28147 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:47:58.819075   28147 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:47:58.819271   28147 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:47:58.820931   28147 fix.go:112] recreateIfNeeded on ha-565925: state=Running err=<nil>
	W0610 10:47:58.820976   28147 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 10:47:58.822954   28147 out.go:177] * Updating the running kvm2 "ha-565925" VM ...
	I0610 10:47:58.824299   28147 machine.go:94] provisionDockerMachine start ...
	I0610 10:47:58.824317   28147 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:47:58.824499   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:47:58.826736   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:58.827337   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:47:58.827367   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:58.827517   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:47:58.827684   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:47:58.827830   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:47:58.827947   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:47:58.828090   28147 main.go:141] libmachine: Using SSH client type: native
	I0610 10:47:58.828251   28147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:47:58.828262   28147 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 10:47:58.943615   28147 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565925
	
	I0610 10:47:58.943647   28147 main.go:141] libmachine: (ha-565925) Calling .GetMachineName
	I0610 10:47:58.943887   28147 buildroot.go:166] provisioning hostname "ha-565925"
	I0610 10:47:58.943923   28147 main.go:141] libmachine: (ha-565925) Calling .GetMachineName
	I0610 10:47:58.944150   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:47:58.947121   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:58.947504   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:47:58.947533   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:58.947778   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:47:58.947967   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:47:58.948149   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:47:58.948296   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:47:58.948437   28147 main.go:141] libmachine: Using SSH client type: native
	I0610 10:47:58.948594   28147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:47:58.948605   28147 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565925 && echo "ha-565925" | sudo tee /etc/hostname
	I0610 10:47:59.080167   28147 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565925
	
	I0610 10:47:59.080208   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:47:59.083486   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:59.083903   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:47:59.083927   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:59.084162   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:47:59.084535   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:47:59.084744   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:47:59.084891   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:47:59.085053   28147 main.go:141] libmachine: Using SSH client type: native
	I0610 10:47:59.085203   28147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:47:59.085218   28147 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565925' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565925/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565925' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 10:47:59.201319   28147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 10:47:59.201355   28147 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19046-3880/.minikube CaCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19046-3880/.minikube}
	I0610 10:47:59.201383   28147 buildroot.go:174] setting up certificates
	I0610 10:47:59.201395   28147 provision.go:84] configureAuth start
	I0610 10:47:59.201409   28147 main.go:141] libmachine: (ha-565925) Calling .GetMachineName
	I0610 10:47:59.201698   28147 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:47:59.204168   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:59.204526   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:47:59.204553   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:59.204725   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:47:59.207058   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:59.207444   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:47:59.207474   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:59.207581   28147 provision.go:143] copyHostCerts
	I0610 10:47:59.207609   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 10:47:59.207668   28147 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem, removing ...
	I0610 10:47:59.207680   28147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 10:47:59.207778   28147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem (1082 bytes)
	I0610 10:47:59.207880   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 10:47:59.207905   28147 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem, removing ...
	I0610 10:47:59.207913   28147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 10:47:59.207954   28147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem (1123 bytes)
	I0610 10:47:59.208023   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 10:47:59.208045   28147 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem, removing ...
	I0610 10:47:59.208051   28147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 10:47:59.208087   28147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem (1675 bytes)
	I0610 10:47:59.208150   28147 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem org=jenkins.ha-565925 san=[127.0.0.1 192.168.39.208 ha-565925 localhost minikube]
	I0610 10:47:59.405927   28147 provision.go:177] copyRemoteCerts
	I0610 10:47:59.405999   28147 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 10:47:59.406026   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:47:59.408573   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:59.408982   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:47:59.409022   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:59.409198   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:47:59.409378   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:47:59.409520   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:47:59.409666   28147 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:47:59.494928   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 10:47:59.495002   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 10:47:59.518217   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 10:47:59.518270   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0610 10:47:59.541448   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 10:47:59.541506   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 10:47:59.565344   28147 provision.go:87] duration metric: took 363.937855ms to configureAuth
	I0610 10:47:59.565375   28147 buildroot.go:189] setting minikube options for container-runtime
	I0610 10:47:59.565629   28147 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:47:59.565708   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:47:59.568281   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:59.568606   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:47:59.568629   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:59.568853   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:47:59.569080   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:47:59.569269   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:47:59.569423   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:47:59.569570   28147 main.go:141] libmachine: Using SSH client type: native
	I0610 10:47:59.569748   28147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:47:59.569764   28147 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 10:49:30.471958   28147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 10:49:30.471982   28147 machine.go:97] duration metric: took 1m31.647670075s to provisionDockerMachine
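	Everything in the provisioning step above is driven over SSH: libmachine returns the host, port, key path and username, a client is opened against 192.168.39.208:22, and each action appears as one ssh_runner.go "Run:" command. A minimal sketch of that pattern using golang.org/x/crypto/ssh rather than minikube's internal ssh_runner package; the address, user and key path simply mirror the values logged above and are not taken from minikube's code.

	package main

	import (
	    "fmt"
	    "log"
	    "os"

	    "golang.org/x/crypto/ssh"
	)

	// runRemote opens a key-authenticated SSH session and runs one command,
	// roughly the shape of the "new ssh client" + Run steps logged above.
	func runRemote(addr, user, keyPath, cmd string) (string, error) {
	    key, err := os.ReadFile(keyPath)
	    if err != nil {
	        return "", err
	    }
	    signer, err := ssh.ParsePrivateKey(key)
	    if err != nil {
	        return "", err
	    }
	    cfg := &ssh.ClientConfig{
	        User: user,
	        Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
	        // Acceptable for a throwaway test VM; not for production hosts.
	        HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	    }
	    client, err := ssh.Dial("tcp", addr, cfg)
	    if err != nil {
	        return "", err
	    }
	    defer client.Close()
	    sess, err := client.NewSession()
	    if err != nil {
	        return "", err
	    }
	    defer sess.Close()
	    out, err := sess.CombinedOutput(cmd)
	    return string(out), err
	}

	func main() {
	    out, err := runRemote("192.168.39.208:22", "docker",
	        "/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa",
	        "cat /etc/os-release")
	    if err != nil {
	        log.Fatal(err)
	    }
	    fmt.Print(out)
	}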
	I0610 10:49:30.471995   28147 start.go:293] postStartSetup for "ha-565925" (driver="kvm2")
	I0610 10:49:30.472006   28147 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 10:49:30.472027   28147 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:49:30.472334   28147 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 10:49:30.472360   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:49:30.475326   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:30.475751   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:49:30.475781   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:30.475882   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:49:30.476085   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:49:30.476267   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:49:30.476408   28147 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:49:30.564496   28147 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 10:49:30.568673   28147 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 10:49:30.568692   28147 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/addons for local assets ...
	I0610 10:49:30.568759   28147 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/files for local assets ...
	I0610 10:49:30.568839   28147 filesync.go:149] local asset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> 107582.pem in /etc/ssl/certs
	I0610 10:49:30.568852   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /etc/ssl/certs/107582.pem
	I0610 10:49:30.568998   28147 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 10:49:30.578028   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /etc/ssl/certs/107582.pem (1708 bytes)
	I0610 10:49:30.603638   28147 start.go:296] duration metric: took 131.631226ms for postStartSetup
	I0610 10:49:30.603677   28147 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:49:30.603973   28147 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0610 10:49:30.604005   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:49:30.606663   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:30.607104   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:49:30.607142   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:30.607275   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:49:30.607487   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:49:30.607648   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:49:30.607777   28147 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	W0610 10:49:30.690454   28147 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0610 10:49:30.690477   28147 fix.go:56] duration metric: took 1m31.887897095s for fixHost
	I0610 10:49:30.690503   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:49:30.693351   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:30.693726   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:49:30.693748   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:30.693922   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:49:30.694113   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:49:30.694245   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:49:30.694394   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:49:30.694600   28147 main.go:141] libmachine: Using SSH client type: native
	I0610 10:49:30.694756   28147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:49:30.694766   28147 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 10:49:30.805822   28147 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718016570.778522855
	
	I0610 10:49:30.805847   28147 fix.go:216] guest clock: 1718016570.778522855
	I0610 10:49:30.805855   28147 fix.go:229] Guest: 2024-06-10 10:49:30.778522855 +0000 UTC Remote: 2024-06-10 10:49:30.690484826 +0000 UTC m=+92.017151784 (delta=88.038029ms)
	I0610 10:49:30.805881   28147 fix.go:200] guest clock delta is within tolerance: 88.038029ms
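	The fix step above reads the guest clock over SSH, computes the delta against the host clock, and accepts it because it falls inside the tolerance. A minimal Go sketch of that comparison; the one-second tolerance used here is an assumption for illustration, and only the measured ~88ms delta comes from the log.

	package main

	import (
	    "fmt"
	    "time"
	)

	// withinTolerance reports whether the absolute skew between guest and host
	// clocks is at most tol. The tolerance value in main is an assumption for
	// illustration; the log only records the measured delta and the verdict
	// that it was acceptable.
	func withinTolerance(guest, host time.Time, tol time.Duration) bool {
	    delta := guest.Sub(host)
	    if delta < 0 {
	        delta = -delta
	    }
	    return delta <= tol
	}

	func main() {
	    host := time.Now()
	    guest := host.Add(88 * time.Millisecond) // a delta comparable to the one logged
	    fmt.Println(withinTolerance(guest, host, time.Second))
	}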
	I0610 10:49:30.805887   28147 start.go:83] releasing machines lock for "ha-565925", held for 1m32.00331847s
	I0610 10:49:30.805918   28147 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:49:30.806303   28147 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:49:30.809325   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:30.809764   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:49:30.809791   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:30.810176   28147 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:49:30.810756   28147 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:49:30.810934   28147 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:49:30.811026   28147 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 10:49:30.811074   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:49:30.811122   28147 ssh_runner.go:195] Run: cat /version.json
	I0610 10:49:30.811146   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:49:30.813763   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:30.814018   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:30.814098   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:49:30.814123   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:30.814236   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:49:30.814387   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:49:30.814414   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:30.814531   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:49:30.814588   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:49:30.814728   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:49:30.814740   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:49:30.814942   28147 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:49:30.814952   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:49:30.815107   28147 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:49:30.894456   28147 ssh_runner.go:195] Run: systemctl --version
	I0610 10:49:30.927052   28147 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 10:49:31.088085   28147 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 10:49:31.094166   28147 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 10:49:31.094235   28147 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 10:49:31.103504   28147 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0610 10:49:31.103530   28147 start.go:494] detecting cgroup driver to use...
	I0610 10:49:31.103594   28147 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 10:49:31.121117   28147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 10:49:31.134460   28147 docker.go:217] disabling cri-docker service (if available) ...
	I0610 10:49:31.134509   28147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 10:49:31.147585   28147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 10:49:31.160577   28147 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 10:49:31.324758   28147 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 10:49:31.475301   28147 docker.go:233] disabling docker service ...
	I0610 10:49:31.475379   28147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 10:49:31.494201   28147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 10:49:31.507130   28147 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 10:49:31.661611   28147 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 10:49:31.816894   28147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 10:49:31.830851   28147 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 10:49:31.847906   28147 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0610 10:49:31.847974   28147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:49:31.857865   28147 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 10:49:31.857935   28147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:49:31.868261   28147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:49:31.877889   28147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:49:31.887524   28147 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 10:49:31.897571   28147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:49:31.907284   28147 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:49:31.917421   28147 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:49:31.927430   28147 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 10:49:31.936253   28147 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 10:49:31.945062   28147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:49:32.082304   28147 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0610 10:49:32.348379   28147 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 10:49:32.348449   28147 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 10:49:32.354030   28147 start.go:562] Will wait 60s for crictl version
	I0610 10:49:32.354083   28147 ssh_runner.go:195] Run: which crictl
	I0610 10:49:32.357517   28147 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 10:49:32.389450   28147 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0610 10:49:32.389521   28147 ssh_runner.go:195] Run: crio --version
	I0610 10:49:32.416981   28147 ssh_runner.go:195] Run: crio --version
	I0610 10:49:32.446480   28147 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0610 10:49:32.447840   28147 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:49:32.450727   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:32.451100   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:49:32.451120   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:32.451315   28147 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0610 10:49:32.455822   28147 kubeadm.go:877] updating cluster {Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.229 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 10:49:32.455952   28147 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 10:49:32.455990   28147 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 10:49:32.501795   28147 crio.go:514] all images are preloaded for cri-o runtime.
	I0610 10:49:32.501822   28147 crio.go:433] Images already preloaded, skipping extraction
	I0610 10:49:32.501869   28147 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 10:49:32.534694   28147 crio.go:514] all images are preloaded for cri-o runtime.
	I0610 10:49:32.534718   28147 cache_images.go:84] Images are preloaded, skipping loading
	I0610 10:49:32.534727   28147 kubeadm.go:928] updating node { 192.168.39.208 8443 v1.30.1 crio true true} ...
	I0610 10:49:32.534838   28147 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565925 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 10:49:32.534917   28147 ssh_runner.go:195] Run: crio config
	I0610 10:49:32.585114   28147 cni.go:84] Creating CNI manager for ""
	I0610 10:49:32.585133   28147 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0610 10:49:32.585142   28147 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 10:49:32.585159   28147 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.208 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-565925 NodeName:ha-565925 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 10:49:32.585287   28147 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-565925"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 10:49:32.585304   28147 kube-vip.go:115] generating kube-vip config ...
	I0610 10:49:32.585340   28147 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0610 10:49:32.596225   28147 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0610 10:49:32.596343   28147 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0610 10:49:32.596393   28147 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 10:49:32.605608   28147 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 10:49:32.605678   28147 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0610 10:49:32.614853   28147 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0610 10:49:32.630679   28147 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 10:49:32.645938   28147 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0610 10:49:32.661440   28147 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0610 10:49:32.678721   28147 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0610 10:49:32.682373   28147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:49:32.819983   28147 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 10:49:32.834903   28147 certs.go:68] Setting up /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925 for IP: 192.168.39.208
	I0610 10:49:32.834933   28147 certs.go:194] generating shared ca certs ...
	I0610 10:49:32.834954   28147 certs.go:226] acquiring lock for ca certs: {Name:mke8b68fecbd8b649419d142cdde25446085a9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:49:32.835127   28147 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key
	I0610 10:49:32.835184   28147 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key
	I0610 10:49:32.835199   28147 certs.go:256] generating profile certs ...
	I0610 10:49:32.835311   28147 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.key
	I0610 10:49:32.835347   28147 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.61a86681
	I0610 10:49:32.835364   28147 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.61a86681 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.230 192.168.39.76 192.168.39.254]
	I0610 10:49:33.111005   28147 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.61a86681 ...
	I0610 10:49:33.111036   28147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.61a86681: {Name:mka6c1e364cfae37b6f112e6f3f1aa66ca53ce26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:49:33.111199   28147 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.61a86681 ...
	I0610 10:49:33.111210   28147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.61a86681: {Name:mke87619ceb9a196226e8ca7401c9b9faf1c2460 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:49:33.111287   28147 certs.go:381] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.61a86681 -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt
	I0610 10:49:33.111436   28147 certs.go:385] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.61a86681 -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key
	I0610 10:49:33.111556   28147 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key
	I0610 10:49:33.111570   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 10:49:33.111587   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 10:49:33.111601   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 10:49:33.111614   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 10:49:33.111626   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 10:49:33.111636   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 10:49:33.111648   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 10:49:33.111661   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 10:49:33.111708   28147 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem (1338 bytes)
	W0610 10:49:33.111732   28147 certs.go:480] ignoring /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758_empty.pem, impossibly tiny 0 bytes
	I0610 10:49:33.111741   28147 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 10:49:33.111761   28147 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem (1082 bytes)
	I0610 10:49:33.111798   28147 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem (1123 bytes)
	I0610 10:49:33.111826   28147 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem (1675 bytes)
	I0610 10:49:33.111861   28147 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem (1708 bytes)
	I0610 10:49:33.111887   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /usr/share/ca-certificates/107582.pem
	I0610 10:49:33.111900   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:49:33.111911   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem -> /usr/share/ca-certificates/10758.pem
	I0610 10:49:33.112511   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 10:49:33.137337   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 10:49:33.183120   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 10:49:33.319014   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 10:49:33.576256   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0610 10:49:33.837107   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 10:49:33.991745   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 10:49:34.074155   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 10:49:34.324284   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /usr/share/ca-certificates/107582.pem (1708 bytes)
	I0610 10:49:34.425934   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 10:49:34.478996   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem --> /usr/share/ca-certificates/10758.pem (1338 bytes)
	I0610 10:49:34.570588   28147 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 10:49:34.603381   28147 ssh_runner.go:195] Run: openssl version
	I0610 10:49:34.611830   28147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 10:49:34.624737   28147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:49:34.629534   28147 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:49:34.629593   28147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:49:34.639439   28147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 10:49:34.656478   28147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10758.pem && ln -fs /usr/share/ca-certificates/10758.pem /etc/ssl/certs/10758.pem"
	I0610 10:49:34.669622   28147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10758.pem
	I0610 10:49:34.674096   28147 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:34 /usr/share/ca-certificates/10758.pem
	I0610 10:49:34.674137   28147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10758.pem
	I0610 10:49:34.682665   28147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10758.pem /etc/ssl/certs/51391683.0"
	I0610 10:49:34.694621   28147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107582.pem && ln -fs /usr/share/ca-certificates/107582.pem /etc/ssl/certs/107582.pem"
	I0610 10:49:34.707206   28147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107582.pem
	I0610 10:49:34.711553   28147 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:34 /usr/share/ca-certificates/107582.pem
	I0610 10:49:34.711614   28147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107582.pem
	I0610 10:49:34.717198   28147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107582.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 10:49:34.728347   28147 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 10:49:34.732904   28147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0610 10:49:34.738974   28147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0610 10:49:34.744571   28147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0610 10:49:34.751126   28147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0610 10:49:34.756784   28147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0610 10:49:34.763044   28147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
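	Each of the openssl x509 -noout -checkend 86400 calls above asks whether the certificate will still be valid 86400 seconds (24 hours) from now, so an expiring cert can be regenerated before the cluster is brought back up. A rough Go equivalent of one such check, assuming a PEM-encoded certificate on disk; the path in main is just the first file checked in the log.

	package main

	import (
	    "crypto/x509"
	    "encoding/pem"
	    "fmt"
	    "log"
	    "os"
	    "time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path will
	// have expired by now+window, mirroring openssl's -checkend behaviour.
	func expiresWithin(path string, window time.Duration) (bool, error) {
	    data, err := os.ReadFile(path)
	    if err != nil {
	        return false, err
	    }
	    block, _ := pem.Decode(data)
	    if block == nil {
	        return false, fmt.Errorf("no PEM block in %s", path)
	    }
	    cert, err := x509.ParseCertificate(block.Bytes)
	    if err != nil {
	        return false, err
	    }
	    return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
	    expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	    if err != nil {
	        log.Fatal(err)
	    }
	    fmt.Println("expires within 24h:", expiring)
	}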
	I0610 10:49:34.768470   28147 kubeadm.go:391] StartCluster: {Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.229 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:49:34.768596   28147 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0610 10:49:34.768658   28147 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 10:49:34.853868   28147 cri.go:89] found id: "6d2fc31bedad8ed60279933edc3fdf9c744a1606b0249fb4358d66b5c7884b47"
	I0610 10:49:34.853894   28147 cri.go:89] found id: "0a358cc1cc573aa1750cc09e41a48373a9ec054c4093e9b04258e36921b56cf5"
	I0610 10:49:34.853900   28147 cri.go:89] found id: "d6b392205cc4da349937ddbd66cd5e4e32466011eb011c9e13a0214e5aeab566"
	I0610 10:49:34.853905   28147 cri.go:89] found id: "ca1b692a8aa8fde2427299395cc8e93b726bd59d0d7029f6a172775a2ed06780"
	I0610 10:49:34.853909   28147 cri.go:89] found id: "d73c4fbf165479cdecba73b84a6325cb243bbb4cd1fed39f1c9a2c00168252e1"
	I0610 10:49:34.853914   28147 cri.go:89] found id: "a51d5bffe5db4200ac6336c7ca76cc182a95d91ff55c5a12955c341dd76f23c5"
	I0610 10:49:34.853918   28147 cri.go:89] found id: "10ce07d12f096d630f9093eb4eeb3bcfb435174cad5058aad05bd4c955206bef"
	I0610 10:49:34.853922   28147 cri.go:89] found id: "a35ae66a1bbe396e6ff9d769def35e984902ed42b5989274e34cad8f90ba2627"
	I0610 10:49:34.853926   28147 cri.go:89] found id: "6a79c08b543bef005daee1e3690fb18317e89ed3a172dcf8fb66dde1d4969fce"
	I0610 10:49:34.853932   28147 cri.go:89] found id: "a0419ef3f2987d9b8cc906b403eddc48694d814716bf8747432c935276cbaf0b"
	I0610 10:49:34.853936   28147 cri.go:89] found id: "b4e9d0b36913d4db0e9450807b1045c3be90511dfa172cd0b480a4042852bb2e"
	I0610 10:49:34.853940   28147 cri.go:89] found id: "bc4df07252fb45872d41728c3386619b228ccc7df4253b6852eb5655c1661866"
	I0610 10:49:34.853943   28147 cri.go:89] found id: "1f037e4537f6182747a78c8398e388d1cd43fe536754d6d8a50f52b8689b3163"
	I0610 10:49:34.853949   28147 cri.go:89] found id: "534a412f3a743952b0fba0175071fb9a47fd04169c4014721c7e5c6931d7e62f"
	I0610 10:49:34.853955   28147 cri.go:89] found id: "fa492285e9f663d2b76c575594b5ba550e97e7861c891c05022d7e2ac1a78f91"
	I0610 10:49:34.853963   28147 cri.go:89] found id: "538119110afb1122b2b9d43d0a15441ed76f351b1221ca19caa981d3aab0eb82"
	I0610 10:49:34.853967   28147 cri.go:89] found id: "15b93b06d8221ab57065f3fdffaa93240b54fcaea6359ee510147933ebc24cdd"
	I0610 10:49:34.853976   28147 cri.go:89] found id: "bcf7ff93de6e7c74b032d544065b02f69bea61c82b2d7cd580d6673506fd0496"
	I0610 10:49:34.853980   28147 cri.go:89] found id: ""
	I0610 10:49:34.854033   28147 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jun 10 10:52:00 ha-565925 crio[3904]: time="2024-06-10 10:52:00.102938952Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b7b8ae2d-05ec-4a90-be64-dab6a0c4ef88 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:52:00 ha-565925 crio[3904]: time="2024-06-10 10:52:00.104790250Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7acd4184-a581-400a-88e5-f8c4d132c5fa name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:52:00 ha-565925 crio[3904]: time="2024-06-10 10:52:00.105308700Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718016720105279605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7acd4184-a581-400a-88e5-f8c4d132c5fa name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:52:00 ha-565925 crio[3904]: time="2024-06-10 10:52:00.105999054Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9aecd206-9198-44d8-9b3a-b9441b3dea0c name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:52:00 ha-565925 crio[3904]: time="2024-06-10 10:52:00.106058602Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9aecd206-9198-44d8-9b3a-b9441b3dea0c name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:52:00 ha-565925 crio[3904]: time="2024-06-10 10:52:00.106927831Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=3d97fac6-3ca8-49e7-9e7b-b13d737eea34 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 10 10:52:00 ha-565925 crio[3904]: time="2024-06-10 10:52:00.107240477Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:276099ec692d58a43f2137fdb8c495cf2b238659587a093f63455929cc0159f8,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-6wmkd,Uid:f8a1e0dc-e561-4def-9787-c5d0eda08fda,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718016606989447824,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T10:41:21.254050246Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cfe7af207d454e48b4c9a313d5fffb0f03c0fb7b7fb6a479a1b43dc5e8d3fa0f,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-565925,Uid:5b7f7bf516814f2c5dbe0fbc6daa3a18,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1718016585696564054,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b7f7bf516814f2c5dbe0fbc6daa3a18,},Annotations:map[string]string{kubernetes.io/config.hash: 5b7f7bf516814f2c5dbe0fbc6daa3a18,kubernetes.io/config.seen: 2024-06-10T10:49:32.651608997Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d74bbdd47986be76d0cd64bcc477460ea153199ba5f7b49f49a95d6c410dc7c4,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-545cf,Uid:7564efde-b96c-48b3-b194-bca695f7ae95,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718016573337503685,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06
-10T10:38:49.597228433Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3afe7674416b272a7b1f2f0765e713a115b8a9fc430d4da60440baaec31d798c,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-wn6nh,Uid:9e47f047-e98b-48c8-8a33-8f790a3e8017,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718016573321801242,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T10:38:49.589282044Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1322b1eb5b92d55d2b0427c212e21c61f03f72a74f50d3d727c725295eaf3c44,Metadata:&PodSandboxMetadata{Name:kindnet-rnn59,Uid:9141e131-eebc-4f51-8b55-46ff649ffaee,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718016573311914565,Labels:map[string]string{app:
kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T10:38:44.065979711Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d3e905f6d61a711b33785d0332754575ce24a61714424b5bce0bd881d36495df,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-565925,Uid:0160bc841c85a002ebb521cea7065bc7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718016573309292584,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0160bc841c85a002ebb521cea7065bc7,kuber
netes.io/config.seen: 2024-06-10T10:38:30.793997530Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:38fe7da9f5e494f306636e4ee0f552c2e44d43db2ef1a04a5ea901f66d5db1e8,Metadata:&PodSandboxMetadata{Name:etcd-ha-565925,Uid:24c16c67f513f809f76a7bbd749e01f3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718016573303434210,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.208:2379,kubernetes.io/config.hash: 24c16c67f513f809f76a7bbd749e01f3,kubernetes.io/config.seen: 2024-06-10T10:38:30.793999653Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:92b6f53b325e00531ba020a4091debef83c310509523dcadd98455c576589d1a,Metadata:&PodSandboxMetadata{Name:kube-proxy-wdjhn,Uid:da3ac11b-0906-4695-
80b1-f3f4f1a34de1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718016573242462220,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T10:38:44.034881743Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:14111cba76dbad18e6e7a34e19ee1b5d192a8facff2d20aca16a16ad6fc22bf7,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-565925,Uid:d811c4cb2aa091785cd31dce6f7bed4f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718016573233220536,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2
aa091785cd31dce6f7bed4f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d811c4cb2aa091785cd31dce6f7bed4f,kubernetes.io/config.seen: 2024-06-10T10:38:30.793996164Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4e5f5234f2b8f6ae4a7073f73c5471b4bcd40a5c30d9f6f34994a1b033dffa5c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-565925,Uid:12d1dab5f9db3366c19df7ea45438b14,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718016573193551367,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.208:8443,kubernetes.io/config.hash: 12d1dab5f9db3366c19df7ea45438b14,kubernetes.io/config.seen: 2024-06-10T10:38:30.793992583Z,kubernetes.io/config.source: f
ile,},RuntimeHandler:,},&PodSandbox{Id:768cff5363857a285258a9ed1604f685fa33d5014b73b34d56f72f72557434f0,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0ca60a36-c445-4520-b857-7df39dfed848,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718016573190730499,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imag
ePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-10T10:38:49.603428236Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4f03a24f1c978aee692934393624f50f3f6023665dc034769ec878f8b821ad07,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-6wmkd,Uid:f8a1e0dc-e561-4def-9787-c5d0eda08fda,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718016081571821297,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T10:41:21.254050246Z,kubernetes.io/config.source
: api,},RuntimeHandler:,},&PodSandbox{Id:937195f05576713819cba22da4e17238c7f675cd0d37572dfc6718570bb4938f,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-545cf,Uid:7564efde-b96c-48b3-b194-bca695f7ae95,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718015929906528064,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T10:38:49.597228433Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b454f12ed3fe06b7ae98d62eb1932133902e43f1db5bb572871f5eb7765942b5,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-wn6nh,Uid:9e47f047-e98b-48c8-8a33-8f790a3e8017,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718015929897339287,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T10:38:49.589282044Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9c2610533ce9301fe46003696bb8fb9ed9f112b3cb0f1a144f0e614826879c22,Metadata:&PodSandboxMetadata{Name:kube-proxy-wdjhn,Uid:da3ac11b-0906-4695-80b1-f3f4f1a34de1,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718015924947847763,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T10:38:44.034881743Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&
PodSandbox{Id:1c1c2a570436913958921b6806bdea488c57ba8e053d9bc44cde3c1407fe58c5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-565925,Uid:0160bc841c85a002ebb521cea7065bc7,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718015904389869820,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0160bc841c85a002ebb521cea7065bc7,kubernetes.io/config.seen: 2024-06-10T10:38:23.929498825Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ae496093662088de763239c043f30d1770c7ce342b51213f0abd2a6d78e5beb7,Metadata:&PodSandboxMetadata{Name:etcd-ha-565925,Uid:24c16c67f513f809f76a7bbd749e01f3,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718015904383823165,Labels:map[string]string{component: etcd,io.kubernetes
.container.name: POD,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.208:2379,kubernetes.io/config.hash: 24c16c67f513f809f76a7bbd749e01f3,kubernetes.io/config.seen: 2024-06-10T10:38:23.929492639Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=3d97fac6-3ca8-49e7-9e7b-b13d737eea34 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 10 10:52:00 ha-565925 crio[3904]: time="2024-06-10 10:52:00.108340814Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=913ddd5a-56cc-4f89-bdd3-ce55382eae33 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:52:00 ha-565925 crio[3904]: time="2024-06-10 10:52:00.108396053Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=913ddd5a-56cc-4f89-bdd3-ce55382eae33 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:52:00 ha-565925 crio[3904]: time="2024-06-10 10:52:00.108820591Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:30454a419886c40b480f6310ea93590cfd5ce458d59101eb2f1d8b18ccc00fe3,PodSandboxId:1322b1eb5b92d55d2b0427c212e21c61f03f72a74f50d3d727c725295eaf3c44,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718016655830984097,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f42a3959512141305a423acbd9e3651a0d52b5082c682b258cd4164bf4c8e22,PodSandboxId:768cff5363857a285258a9ed1604f685fa33d5014b73b34d56f72f72557434f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718016651830324024,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:895531b30d08486c2c45c81d3c4061852a40480faff500bc98d063e08c3908f2,PodSandboxId:4e5f5234f2b8f6ae4a7073f73c5471b4bcd40a5c30d9f6f34994a1b033dffa5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718016615822358433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba05d1801bbb55716b014287ef6d2a8e0065c2e60eb0da2be941e285cce4111d,PodSandboxId:14111cba76dbad18e6e7a34e19ee1b5d192a8facff2d20aca16a16ad6fc22bf7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718016612826583803,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18be5875f033dc26e05de432e9aafd5da62427c82b8a7148b7a2315e67a331fa,PodSandboxId:768cff5363857a285258a9ed1604f685fa33d5014b73b34d56f72f72557434f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718016610822393036,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e293a1cc869311fd15c723f109226cd7cf9e58f9c0ce73b81e66e643ba0824,PodSandboxId:276099ec692d58a43f2137fdb8c495cf2b238659587a093f63455929cc0159f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718016607125159409,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031c3214a18181965175ad1ce4be9461912a8f144a9fd8499e18a516fbc4c24b,PodSandboxId:cfe7af207d454e48b4c9a313d5fffb0f03c0fb7b7fb6a479a1b43dc5e8d3fa0f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718016585794439061,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b7f7bf516814f2c5dbe0fbc6daa3a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:d6b392205cc4da349937ddbd66cd5e4e32466011eb011c9e13a0214e5aeab566,PodSandboxId:92b6f53b325e00531ba020a4091debef83c310509523dcadd98455c576589d1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718016573870529543,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2fc31b
edad8ed60279933edc3fdf9c744a1606b0249fb4358d66b5c7884b47,PodSandboxId:1322b1eb5b92d55d2b0427c212e21c61f03f72a74f50d3d727c725295eaf3c44,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718016574022181270,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a358cc1cc573aa1750cc09e41a48373a9ec054c409
3e9b04258e36921b56cf5,PodSandboxId:3afe7674416b272a7b1f2f0765e713a115b8a9fc430d4da60440baaec31d798c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718016573906704187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a51d5bffe5db4200ac6336c7ca76cc182a95d91ff55c5a12955c341dd76f23c5,PodSandboxId:38fe7da9f5e494f306636e4ee0f552c2e44d43db2ef1a04a5ea901f66d5db1e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718016573751897701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73c4fbf165479cdecba73b84a6325cb243bbb4cd1fed39f1c9a2c00168252e1,PodSandboxId:d3e905f6d61a711b33785d0332754575ce24a61714424b5bce0bd881d36495df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718016573784410334,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMes
sagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca1b692a8aa8fde2427299395cc8e93b726bd59d0d7029f6a172775a2ed06780,PodSandboxId:d74bbdd47986be76d0cd64bcc477460ea153199ba5f7b49f49a95d6c410dc7c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718016573866821840,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\
",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a35ae66a1bbe396e6ff9d769def35e984902ed42b5989274e34cad8f90ba2627,PodSandboxId:14111cba76dbad18e6e7a34e19ee1b5d192a8facff2d20aca16a16ad6fc22bf7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718016573678086400,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[
string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ce07d12f096d630f9093eb4eeb3bcfb435174cad5058aad05bd4c955206bef,PodSandboxId:4e5f5234f2b8f6ae4a7073f73c5471b4bcd40a5c30d9f6f34994a1b033dffa5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718016573705740508,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kuber
netes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2874c04d7e6035f0b4f93397eceefa3af883aa2a03dc83be4a8aced86a5e132,PodSandboxId:4f03a24f1c978aee692934393624f50f3f6023665dc034769ec878f8b821ad07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718016084446209177,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kuberne
tes.container.hash: 8230443c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f037e4537f6182747a78c8398e388d1cd43fe536754d6d8a50f52b8689b3163,PodSandboxId:937195f05576713819cba22da4e17238c7f675cd0d37572dfc6718570bb4938f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718015930175667446,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:534a412f3a743952b0fba0175071fb9a47fd04169c4014721c7e5c6931d7e62f,PodSandboxId:b454f12ed3fe06b7ae98d62eb1932133902e43f1db5bb572871f5eb7765942b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718015930144918315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa492285e9f663d2b76c575594b5ba550e97e7861c891c05022d7e2ac1a78f91,PodSandboxId:9c2610533ce9301fe46003696bb8fb9ed9f112b3cb0f1a144f0e614826879c22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718015925064910752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b93b06d8221ab57065f3fdffaa93240b54fcaea6359ee510147933ebc24cdd,PodSandboxId:ae496093662088de763239c043f30d1770c7ce342b51213f0abd2a6d78e5beb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1718015904609428104,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:538119110afb1122b2b9d43d0a15441ed76f351b1221ca19caa981d3aab0eb82,PodSandboxId:1c1c2a570436913958921b6806bdea488c57ba8e053d9bc44cde3c1407fe58c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1718015904613266630,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9aecd206-9198-44d8-9b3a-b9441b3dea0c name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:52:00 ha-565925 crio[3904]: time="2024-06-10 10:52:00.112273919Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:30454a419886c40b480f6310ea93590cfd5ce458d59101eb2f1d8b18ccc00fe3,PodSandboxId:1322b1eb5b92d55d2b0427c212e21c61f03f72a74f50d3d727c725295eaf3c44,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718016655830984097,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f42a3959512141305a423acbd9e3651a0d52b5082c682b258cd4164bf4c8e22,PodSandboxId:768cff5363857a285258a9ed1604f685fa33d5014b73b34d56f72f72557434f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718016651830324024,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:895531b30d08486c2c45c81d3c4061852a40480faff500bc98d063e08c3908f2,PodSandboxId:4e5f5234f2b8f6ae4a7073f73c5471b4bcd40a5c30d9f6f34994a1b033dffa5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718016615822358433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba05d1801bbb55716b014287ef6d2a8e0065c2e60eb0da2be941e285cce4111d,PodSandboxId:14111cba76dbad18e6e7a34e19ee1b5d192a8facff2d20aca16a16ad6fc22bf7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718016612826583803,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18be5875f033dc26e05de432e9aafd5da62427c82b8a7148b7a2315e67a331fa,PodSandboxId:768cff5363857a285258a9ed1604f685fa33d5014b73b34d56f72f72557434f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718016610822393036,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e293a1cc869311fd15c723f109226cd7cf9e58f9c0ce73b81e66e643ba0824,PodSandboxId:276099ec692d58a43f2137fdb8c495cf2b238659587a093f63455929cc0159f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718016607125159409,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031c3214a18181965175ad1ce4be9461912a8f144a9fd8499e18a516fbc4c24b,PodSandboxId:cfe7af207d454e48b4c9a313d5fffb0f03c0fb7b7fb6a479a1b43dc5e8d3fa0f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718016585794439061,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b7f7bf516814f2c5dbe0fbc6daa3a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:d6b392205cc4da349937ddbd66cd5e4e32466011eb011c9e13a0214e5aeab566,PodSandboxId:92b6f53b325e00531ba020a4091debef83c310509523dcadd98455c576589d1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718016573870529543,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2fc31b
edad8ed60279933edc3fdf9c744a1606b0249fb4358d66b5c7884b47,PodSandboxId:1322b1eb5b92d55d2b0427c212e21c61f03f72a74f50d3d727c725295eaf3c44,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718016574022181270,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a358cc1cc573aa1750cc09e41a48373a9ec054c409
3e9b04258e36921b56cf5,PodSandboxId:3afe7674416b272a7b1f2f0765e713a115b8a9fc430d4da60440baaec31d798c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718016573906704187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a51d5bffe5db4200ac6336c7ca76cc182a95d91ff55c5a12955c341dd76f23c5,PodSandboxId:38fe7da9f5e494f306636e4ee0f552c2e44d43db2ef1a04a5ea901f66d5db1e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718016573751897701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73c4fbf165479cdecba73b84a6325cb243bbb4cd1fed39f1c9a2c00168252e1,PodSandboxId:d3e905f6d61a711b33785d0332754575ce24a61714424b5bce0bd881d36495df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718016573784410334,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMes
sagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca1b692a8aa8fde2427299395cc8e93b726bd59d0d7029f6a172775a2ed06780,PodSandboxId:d74bbdd47986be76d0cd64bcc477460ea153199ba5f7b49f49a95d6c410dc7c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718016573866821840,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\
",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a35ae66a1bbe396e6ff9d769def35e984902ed42b5989274e34cad8f90ba2627,PodSandboxId:14111cba76dbad18e6e7a34e19ee1b5d192a8facff2d20aca16a16ad6fc22bf7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718016573678086400,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[
string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ce07d12f096d630f9093eb4eeb3bcfb435174cad5058aad05bd4c955206bef,PodSandboxId:4e5f5234f2b8f6ae4a7073f73c5471b4bcd40a5c30d9f6f34994a1b033dffa5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718016573705740508,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kuber
netes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2874c04d7e6035f0b4f93397eceefa3af883aa2a03dc83be4a8aced86a5e132,PodSandboxId:4f03a24f1c978aee692934393624f50f3f6023665dc034769ec878f8b821ad07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718016084446209177,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kuberne
tes.container.hash: 8230443c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f037e4537f6182747a78c8398e388d1cd43fe536754d6d8a50f52b8689b3163,PodSandboxId:937195f05576713819cba22da4e17238c7f675cd0d37572dfc6718570bb4938f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718015930175667446,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:534a412f3a743952b0fba0175071fb9a47fd04169c4014721c7e5c6931d7e62f,PodSandboxId:b454f12ed3fe06b7ae98d62eb1932133902e43f1db5bb572871f5eb7765942b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718015930144918315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa492285e9f663d2b76c575594b5ba550e97e7861c891c05022d7e2ac1a78f91,PodSandboxId:9c2610533ce9301fe46003696bb8fb9ed9f112b3cb0f1a144f0e614826879c22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718015925064910752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b93b06d8221ab57065f3fdffaa93240b54fcaea6359ee510147933ebc24cdd,PodSandboxId:ae496093662088de763239c043f30d1770c7ce342b51213f0abd2a6d78e5beb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1718015904609428104,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:538119110afb1122b2b9d43d0a15441ed76f351b1221ca19caa981d3aab0eb82,PodSandboxId:1c1c2a570436913958921b6806bdea488c57ba8e053d9bc44cde3c1407fe58c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1718015904613266630,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=913ddd5a-56cc-4f89-bdd3-ce55382eae33 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:52:00 ha-565925 crio[3904]: time="2024-06-10 10:52:00.165156884Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=36d23d3e-ce30-45e7-b313-8172139a7571 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:52:00 ha-565925 crio[3904]: time="2024-06-10 10:52:00.165259893Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36d23d3e-ce30-45e7-b313-8172139a7571 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:52:00 ha-565925 crio[3904]: time="2024-06-10 10:52:00.167417532Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=48fd6947-46db-485b-af4c-688973b6405e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:52:00 ha-565925 crio[3904]: time="2024-06-10 10:52:00.168024646Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718016720167999254,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48fd6947-46db-485b-af4c-688973b6405e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:52:00 ha-565925 crio[3904]: time="2024-06-10 10:52:00.169156719Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0d99754-0888-45e1-a5cd-a9a33eedf198 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:52:00 ha-565925 crio[3904]: time="2024-06-10 10:52:00.169260260Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0d99754-0888-45e1-a5cd-a9a33eedf198 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:52:00 ha-565925 crio[3904]: time="2024-06-10 10:52:00.169855962Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:30454a419886c40b480f6310ea93590cfd5ce458d59101eb2f1d8b18ccc00fe3,PodSandboxId:1322b1eb5b92d55d2b0427c212e21c61f03f72a74f50d3d727c725295eaf3c44,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718016655830984097,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f42a3959512141305a423acbd9e3651a0d52b5082c682b258cd4164bf4c8e22,PodSandboxId:768cff5363857a285258a9ed1604f685fa33d5014b73b34d56f72f72557434f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718016651830324024,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:895531b30d08486c2c45c81d3c4061852a40480faff500bc98d063e08c3908f2,PodSandboxId:4e5f5234f2b8f6ae4a7073f73c5471b4bcd40a5c30d9f6f34994a1b033dffa5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718016615822358433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba05d1801bbb55716b014287ef6d2a8e0065c2e60eb0da2be941e285cce4111d,PodSandboxId:14111cba76dbad18e6e7a34e19ee1b5d192a8facff2d20aca16a16ad6fc22bf7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718016612826583803,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18be5875f033dc26e05de432e9aafd5da62427c82b8a7148b7a2315e67a331fa,PodSandboxId:768cff5363857a285258a9ed1604f685fa33d5014b73b34d56f72f72557434f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718016610822393036,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e293a1cc869311fd15c723f109226cd7cf9e58f9c0ce73b81e66e643ba0824,PodSandboxId:276099ec692d58a43f2137fdb8c495cf2b238659587a093f63455929cc0159f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718016607125159409,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031c3214a18181965175ad1ce4be9461912a8f144a9fd8499e18a516fbc4c24b,PodSandboxId:cfe7af207d454e48b4c9a313d5fffb0f03c0fb7b7fb6a479a1b43dc5e8d3fa0f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718016585794439061,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b7f7bf516814f2c5dbe0fbc6daa3a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:d6b392205cc4da349937ddbd66cd5e4e32466011eb011c9e13a0214e5aeab566,PodSandboxId:92b6f53b325e00531ba020a4091debef83c310509523dcadd98455c576589d1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718016573870529543,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2fc31b
edad8ed60279933edc3fdf9c744a1606b0249fb4358d66b5c7884b47,PodSandboxId:1322b1eb5b92d55d2b0427c212e21c61f03f72a74f50d3d727c725295eaf3c44,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718016574022181270,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a358cc1cc573aa1750cc09e41a48373a9ec054c409
3e9b04258e36921b56cf5,PodSandboxId:3afe7674416b272a7b1f2f0765e713a115b8a9fc430d4da60440baaec31d798c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718016573906704187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a51d5bffe5db4200ac6336c7ca76cc182a95d91ff55c5a12955c341dd76f23c5,PodSandboxId:38fe7da9f5e494f306636e4ee0f552c2e44d43db2ef1a04a5ea901f66d5db1e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718016573751897701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73c4fbf165479cdecba73b84a6325cb243bbb4cd1fed39f1c9a2c00168252e1,PodSandboxId:d3e905f6d61a711b33785d0332754575ce24a61714424b5bce0bd881d36495df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718016573784410334,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMes
sagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca1b692a8aa8fde2427299395cc8e93b726bd59d0d7029f6a172775a2ed06780,PodSandboxId:d74bbdd47986be76d0cd64bcc477460ea153199ba5f7b49f49a95d6c410dc7c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718016573866821840,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\
",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a35ae66a1bbe396e6ff9d769def35e984902ed42b5989274e34cad8f90ba2627,PodSandboxId:14111cba76dbad18e6e7a34e19ee1b5d192a8facff2d20aca16a16ad6fc22bf7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718016573678086400,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[
string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ce07d12f096d630f9093eb4eeb3bcfb435174cad5058aad05bd4c955206bef,PodSandboxId:4e5f5234f2b8f6ae4a7073f73c5471b4bcd40a5c30d9f6f34994a1b033dffa5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718016573705740508,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kuber
netes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2874c04d7e6035f0b4f93397eceefa3af883aa2a03dc83be4a8aced86a5e132,PodSandboxId:4f03a24f1c978aee692934393624f50f3f6023665dc034769ec878f8b821ad07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718016084446209177,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kuberne
tes.container.hash: 8230443c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f037e4537f6182747a78c8398e388d1cd43fe536754d6d8a50f52b8689b3163,PodSandboxId:937195f05576713819cba22da4e17238c7f675cd0d37572dfc6718570bb4938f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718015930175667446,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:534a412f3a743952b0fba0175071fb9a47fd04169c4014721c7e5c6931d7e62f,PodSandboxId:b454f12ed3fe06b7ae98d62eb1932133902e43f1db5bb572871f5eb7765942b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718015930144918315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa492285e9f663d2b76c575594b5ba550e97e7861c891c05022d7e2ac1a78f91,PodSandboxId:9c2610533ce9301fe46003696bb8fb9ed9f112b3cb0f1a144f0e614826879c22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718015925064910752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b93b06d8221ab57065f3fdffaa93240b54fcaea6359ee510147933ebc24cdd,PodSandboxId:ae496093662088de763239c043f30d1770c7ce342b51213f0abd2a6d78e5beb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1718015904609428104,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:538119110afb1122b2b9d43d0a15441ed76f351b1221ca19caa981d3aab0eb82,PodSandboxId:1c1c2a570436913958921b6806bdea488c57ba8e053d9bc44cde3c1407fe58c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1718015904613266630,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b0d99754-0888-45e1-a5cd-a9a33eedf198 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:52:00 ha-565925 crio[3904]: time="2024-06-10 10:52:00.213268897Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a69e1957-0fd7-47b9-9043-d628b398ae81 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:52:00 ha-565925 crio[3904]: time="2024-06-10 10:52:00.213346554Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a69e1957-0fd7-47b9-9043-d628b398ae81 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:52:00 ha-565925 crio[3904]: time="2024-06-10 10:52:00.214597361Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e418f924-8ec9-4bfc-913f-c7a3f86e1bfe name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:52:00 ha-565925 crio[3904]: time="2024-06-10 10:52:00.215156770Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718016720215131198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e418f924-8ec9-4bfc-913f-c7a3f86e1bfe name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:52:00 ha-565925 crio[3904]: time="2024-06-10 10:52:00.215825526Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=93dcc8fe-03e8-4056-afcb-e6257059ac09 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:52:00 ha-565925 crio[3904]: time="2024-06-10 10:52:00.215932490Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=93dcc8fe-03e8-4056-afcb-e6257059ac09 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:52:00 ha-565925 crio[3904]: time="2024-06-10 10:52:00.216363162Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:30454a419886c40b480f6310ea93590cfd5ce458d59101eb2f1d8b18ccc00fe3,PodSandboxId:1322b1eb5b92d55d2b0427c212e21c61f03f72a74f50d3d727c725295eaf3c44,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718016655830984097,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f42a3959512141305a423acbd9e3651a0d52b5082c682b258cd4164bf4c8e22,PodSandboxId:768cff5363857a285258a9ed1604f685fa33d5014b73b34d56f72f72557434f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718016651830324024,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:895531b30d08486c2c45c81d3c4061852a40480faff500bc98d063e08c3908f2,PodSandboxId:4e5f5234f2b8f6ae4a7073f73c5471b4bcd40a5c30d9f6f34994a1b033dffa5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718016615822358433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba05d1801bbb55716b014287ef6d2a8e0065c2e60eb0da2be941e285cce4111d,PodSandboxId:14111cba76dbad18e6e7a34e19ee1b5d192a8facff2d20aca16a16ad6fc22bf7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718016612826583803,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18be5875f033dc26e05de432e9aafd5da62427c82b8a7148b7a2315e67a331fa,PodSandboxId:768cff5363857a285258a9ed1604f685fa33d5014b73b34d56f72f72557434f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718016610822393036,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e293a1cc869311fd15c723f109226cd7cf9e58f9c0ce73b81e66e643ba0824,PodSandboxId:276099ec692d58a43f2137fdb8c495cf2b238659587a093f63455929cc0159f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718016607125159409,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031c3214a18181965175ad1ce4be9461912a8f144a9fd8499e18a516fbc4c24b,PodSandboxId:cfe7af207d454e48b4c9a313d5fffb0f03c0fb7b7fb6a479a1b43dc5e8d3fa0f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718016585794439061,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b7f7bf516814f2c5dbe0fbc6daa3a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:d6b392205cc4da349937ddbd66cd5e4e32466011eb011c9e13a0214e5aeab566,PodSandboxId:92b6f53b325e00531ba020a4091debef83c310509523dcadd98455c576589d1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718016573870529543,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2fc31b
edad8ed60279933edc3fdf9c744a1606b0249fb4358d66b5c7884b47,PodSandboxId:1322b1eb5b92d55d2b0427c212e21c61f03f72a74f50d3d727c725295eaf3c44,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718016574022181270,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a358cc1cc573aa1750cc09e41a48373a9ec054c409
3e9b04258e36921b56cf5,PodSandboxId:3afe7674416b272a7b1f2f0765e713a115b8a9fc430d4da60440baaec31d798c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718016573906704187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a51d5bffe5db4200ac6336c7ca76cc182a95d91ff55c5a12955c341dd76f23c5,PodSandboxId:38fe7da9f5e494f306636e4ee0f552c2e44d43db2ef1a04a5ea901f66d5db1e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718016573751897701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73c4fbf165479cdecba73b84a6325cb243bbb4cd1fed39f1c9a2c00168252e1,PodSandboxId:d3e905f6d61a711b33785d0332754575ce24a61714424b5bce0bd881d36495df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718016573784410334,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMes
sagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca1b692a8aa8fde2427299395cc8e93b726bd59d0d7029f6a172775a2ed06780,PodSandboxId:d74bbdd47986be76d0cd64bcc477460ea153199ba5f7b49f49a95d6c410dc7c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718016573866821840,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\
",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a35ae66a1bbe396e6ff9d769def35e984902ed42b5989274e34cad8f90ba2627,PodSandboxId:14111cba76dbad18e6e7a34e19ee1b5d192a8facff2d20aca16a16ad6fc22bf7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718016573678086400,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[
string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ce07d12f096d630f9093eb4eeb3bcfb435174cad5058aad05bd4c955206bef,PodSandboxId:4e5f5234f2b8f6ae4a7073f73c5471b4bcd40a5c30d9f6f34994a1b033dffa5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718016573705740508,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kuber
netes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2874c04d7e6035f0b4f93397eceefa3af883aa2a03dc83be4a8aced86a5e132,PodSandboxId:4f03a24f1c978aee692934393624f50f3f6023665dc034769ec878f8b821ad07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718016084446209177,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kuberne
tes.container.hash: 8230443c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f037e4537f6182747a78c8398e388d1cd43fe536754d6d8a50f52b8689b3163,PodSandboxId:937195f05576713819cba22da4e17238c7f675cd0d37572dfc6718570bb4938f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718015930175667446,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:534a412f3a743952b0fba0175071fb9a47fd04169c4014721c7e5c6931d7e62f,PodSandboxId:b454f12ed3fe06b7ae98d62eb1932133902e43f1db5bb572871f5eb7765942b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718015930144918315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa492285e9f663d2b76c575594b5ba550e97e7861c891c05022d7e2ac1a78f91,PodSandboxId:9c2610533ce9301fe46003696bb8fb9ed9f112b3cb0f1a144f0e614826879c22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718015925064910752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b93b06d8221ab57065f3fdffaa93240b54fcaea6359ee510147933ebc24cdd,PodSandboxId:ae496093662088de763239c043f30d1770c7ce342b51213f0abd2a6d78e5beb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1718015904609428104,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:538119110afb1122b2b9d43d0a15441ed76f351b1221ca19caa981d3aab0eb82,PodSandboxId:1c1c2a570436913958921b6806bdea488c57ba8e053d9bc44cde3c1407fe58c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1718015904613266630,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=93dcc8fe-03e8-4056-afcb-e6257059ac09 name=/runtime.v1.RuntimeService/ListContainers
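	Note: the debug entries above are CRI-O answering the CRI RuntimeService/ListContainers RPC with an empty filter, which is why each request is followed by "No filters were applied, returning full container list". For reference, a minimal Go sketch of issuing the same call directly against the CRI-O socket is shown below; the socket path (/var/run/crio/crio.sock) and the use of k8s.io/cri-api are assumptions for a default CRI-O install and are not taken from this run.

	// Minimal sketch (assumed defaults, not part of this test run): list containers
	// over the CRI API, mirroring the ListContainersRequest/Response pairs logged above.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumed CRI-O socket path for a default install.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI socket: %v", err)
		}
		defer conn.Close()

		client := runtime.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter returns the full container list, as in the log above.
		resp, err := client.ListContainers(ctx, &runtime.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s\t%s\t%s\tattempt=%d\n",
				c.Id[:13], c.Metadata.Name, c.State, c.Metadata.Attempt)
		}
	}

	The same listing is summarised in the "container status" table that follows.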
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	30454a419886c       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      About a minute ago   Running             kindnet-cni               4                   1322b1eb5b92d       kindnet-rnn59
	3f42a39595121       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   768cff5363857       storage-provisioner
	895531b30d084       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      About a minute ago   Running             kube-apiserver            3                   4e5f5234f2b8f       kube-apiserver-ha-565925
	ba05d1801bbb5       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      About a minute ago   Running             kube-controller-manager   2                   14111cba76dba       kube-controller-manager-ha-565925
	18be5875f033d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   768cff5363857       storage-provisioner
	51e293a1cc869       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   276099ec692d5       busybox-fc5497c4f-6wmkd
	031c3214a1818       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   cfe7af207d454       kube-vip-ha-565925
	6d2fc31bedad8       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      2 minutes ago        Exited              kindnet-cni               3                   1322b1eb5b92d       kindnet-rnn59
	0a358cc1cc573       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   3afe7674416b2       coredns-7db6d8ff4d-wn6nh
	d6b392205cc4d       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      2 minutes ago        Running             kube-proxy                1                   92b6f53b325e0       kube-proxy-wdjhn
	ca1b692a8aa8f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   d74bbdd47986b       coredns-7db6d8ff4d-545cf
	d73c4fbf16547       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      2 minutes ago        Running             kube-scheduler            1                   d3e905f6d61a7       kube-scheduler-ha-565925
	a51d5bffe5db4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   38fe7da9f5e49       etcd-ha-565925
	10ce07d12f096       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      2 minutes ago        Exited              kube-apiserver            2                   4e5f5234f2b8f       kube-apiserver-ha-565925
	a35ae66a1bbe3       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      2 minutes ago        Exited              kube-controller-manager   1                   14111cba76dba       kube-controller-manager-ha-565925
	e2874c04d7e60       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   4f03a24f1c978       busybox-fc5497c4f-6wmkd
	1f037e4537f61       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   937195f055767       coredns-7db6d8ff4d-545cf
	534a412f3a743       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   b454f12ed3fe0       coredns-7db6d8ff4d-wn6nh
	fa492285e9f66       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      13 minutes ago       Exited              kube-proxy                0                   9c2610533ce93       kube-proxy-wdjhn
	538119110afb1       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      13 minutes ago       Exited              kube-scheduler            0                   1c1c2a5704369       kube-scheduler-ha-565925
	15b93b06d8221       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   ae49609366208       etcd-ha-565925
	
	
	==> coredns [0a358cc1cc573aa1750cc09e41a48373a9ec054c4093e9b04258e36921b56cf5] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
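	Note: the repeated dial errors against 10.96.0.1:443 show the coredns kubernetes plugin unable to reach the in-cluster apiserver endpoint while the control plane restarts ("connection refused" while the node answers but nothing is listening, "no route to host" while node networking is being reconfigured). A minimal sketch that probes the same endpoint is given below; only the address 10.96.0.1:443 comes from the log, everything else is a hypothetical illustration.

	// Hypothetical sketch: reproduce the connectivity check that the coredns
	// kubernetes plugin is effectively failing above, by dialing the
	// kubernetes service ClusterIP and reporting the dial error.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 10.96.0.1:443 is the kubernetes service ClusterIP:port seen in the log.
		addr := "10.96.0.1:443"
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// While the apiserver is unreachable this prints errors analogous to
			// the "connection refused" / "no route to host" lines above.
			fmt.Printf("dial %s failed: %v\n", addr, err)
			return
		}
		defer conn.Close()
		fmt.Printf("dial %s succeeded\n", addr)
	}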
	
	
	==> coredns [1f037e4537f6182747a78c8398e388d1cd43fe536754d6d8a50f52b8689b3163] <==
	[INFO] 10.244.1.2:48212 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000372595s
	[INFO] 10.244.1.2:38672 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000558623s
	[INFO] 10.244.1.2:39378 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001712401s
	[INFO] 10.244.2.2:60283 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000168931s
	[INFO] 10.244.0.4:44797 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009875834s
	[INFO] 10.244.0.4:48555 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000169499s
	[INFO] 10.244.0.4:59395 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000177597s
	[INFO] 10.244.1.2:59265 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000530757s
	[INFO] 10.244.1.2:47710 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001604733s
	[INFO] 10.244.1.2:52315 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000138586s
	[INFO] 10.244.2.2:55693 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155911s
	[INFO] 10.244.2.2:58799 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094891s
	[INFO] 10.244.2.2:42423 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109708s
	[INFO] 10.244.0.4:50874 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174304s
	[INFO] 10.244.1.2:48744 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098356s
	[INFO] 10.244.1.2:57572 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107588s
	[INFO] 10.244.1.2:43906 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000582793s
	[INFO] 10.244.0.4:36933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083881s
	[INFO] 10.244.0.4:57895 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011453s
	[INFO] 10.244.1.2:33157 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149048s
	[INFO] 10.244.1.2:51327 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000136605s
	[INFO] 10.244.1.2:57659 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000126557s
	[INFO] 10.244.2.2:42606 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000153767s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [534a412f3a743952b0fba0175071fb9a47fd04169c4014721c7e5c6931d7e62f] <==
	[INFO] 10.244.1.2:56818 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001759713s
	[INFO] 10.244.1.2:38288 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.001994069s
	[INFO] 10.244.1.2:34752 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150866s
	[INFO] 10.244.1.2:40260 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000146857s
	[INFO] 10.244.2.2:44655 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154352s
	[INFO] 10.244.2.2:33459 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001816989s
	[INFO] 10.244.2.2:44738 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000324114s
	[INFO] 10.244.2.2:47736 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091876s
	[INFO] 10.244.2.2:44490 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001443467s
	[INFO] 10.244.0.4:55625 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000175656s
	[INFO] 10.244.0.4:39661 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080931s
	[INFO] 10.244.0.4:50296 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000636942s
	[INFO] 10.244.1.2:38824 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118172s
	[INFO] 10.244.2.2:42842 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216365s
	[INFO] 10.244.2.2:59068 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011868s
	[INFO] 10.244.2.2:38486 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000206394s
	[INFO] 10.244.2.2:33649 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110039s
	[INFO] 10.244.0.4:39573 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000202562s
	[INFO] 10.244.0.4:57326 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000128886s
	[INFO] 10.244.1.2:39682 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000217002s
	[INFO] 10.244.2.2:39360 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000367518s
	[INFO] 10.244.2.2:55914 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000522453s
	[INFO] 10.244.2.2:54263 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00020711s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ca1b692a8aa8fde2427299395cc8e93b726bd59d0d7029f6a172775a2ed06780] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:37476->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
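	Editor's note: the "connection refused" and "no route to host" errors above are CoreDNS's kubernetes plugin failing to reach the apiserver Service IP (10.96.0.1:443) while the control plane restarts, which is why the ready plugin keeps reporting 'Still waiting on: "kubernetes"'. The following is a minimal, purely illustrative Go sketch (not CoreDNS or minikube code) of a standalone TCP probe for that endpoint; the address and retry counts are taken from this log or chosen arbitrarily.
	
	// probe_apiserver.go - hypothetical standalone probe for the apiserver Service IP.
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		const addr = "10.96.0.1:443" // Service IP seen in the log above; adjust per cluster
		for attempt := 1; attempt <= 10; attempt++ {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err != nil {
				// "connection refused" / "no route to host" here matches the CoreDNS errors above.
				fmt.Printf("attempt %d: %v\n", attempt, err)
				time.Sleep(3 * time.Second)
				continue
			}
			conn.Close()
			fmt.Printf("attempt %d: apiserver reachable at %s\n", attempt, addr)
			return
		}
		fmt.Println("apiserver still unreachable after 10 attempts")
	}
	
	Such a probe is only meaningful when run from a pod or node that should have a route to the Service CIDR.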
	
	
	==> describe nodes <==
	Name:               ha-565925
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565925
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-565925
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T10_38_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 10:38:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565925
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 10:51:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 10:50:19 +0000   Mon, 10 Jun 2024 10:38:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 10:50:19 +0000   Mon, 10 Jun 2024 10:38:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 10:50:19 +0000   Mon, 10 Jun 2024 10:38:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 10:50:19 +0000   Mon, 10 Jun 2024 10:38:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.208
	  Hostname:    ha-565925
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 81e39b112b50436db5c7fc16ce8eb53e
	  System UUID:                81e39b11-2b50-436d-b5c7-fc16ce8eb53e
	  Boot ID:                    afd4fe8d-84f7-41ff-9890-dc78b1ff1343
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6wmkd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-545cf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-wn6nh             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-565925                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-rnn59                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-565925             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-565925    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-wdjhn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-565925             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-565925                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 104s                   kube-proxy       
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-565925 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-565925 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-565925 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-565925 status is now: NodeReady
	  Normal   RegisteredNode           11m                    node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Warning  ContainerGCFailed        2m30s (x2 over 3m30s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           90s                    node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Normal   RegisteredNode           88s                    node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Normal   RegisteredNode           39s                    node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	
	
	Name:               ha-565925-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565925-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-565925
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T10_39_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 10:39:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565925-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 10:52:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 10:50:58 +0000   Mon, 10 Jun 2024 10:50:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 10:50:58 +0000   Mon, 10 Jun 2024 10:50:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 10:50:58 +0000   Mon, 10 Jun 2024 10:50:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 10:50:58 +0000   Mon, 10 Jun 2024 10:50:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    ha-565925-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 55a76fcaaea54bebb8694a2ff5e7d2ea
	  System UUID:                55a76fca-aea5-4beb-b869-4a2ff5e7d2ea
	  Boot ID:                    f2031124-7282-4f77-956b-81d80d2807d2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8g67g                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-565925-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-9jv7q                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-565925-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-565925-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-vbgnx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-565925-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-565925-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 93s                  kube-proxy       
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-565925-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-565925-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-565925-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                  node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal  NodeNotReady             8m47s                node-controller  Node ha-565925-m02 status is now: NodeNotReady
	  Normal  Starting                 2m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m2s (x8 over 2m2s)  kubelet          Node ha-565925-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s (x8 over 2m2s)  kubelet          Node ha-565925-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s (x7 over 2m2s)  kubelet          Node ha-565925-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           90s                  node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal  RegisteredNode           88s                  node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal  RegisteredNode           39s                  node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	
	
	Name:               ha-565925-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565925-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-565925
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T10_40_59_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 10:40:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565925-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 10:51:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 10:51:27 +0000   Mon, 10 Jun 2024 10:40:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 10:51:27 +0000   Mon, 10 Jun 2024 10:40:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 10:51:27 +0000   Mon, 10 Jun 2024 10:40:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 10:51:27 +0000   Mon, 10 Jun 2024 10:41:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.76
	  Hostname:    ha-565925-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c8de12ccd43b4441ac42fe5a4b57ed64
	  System UUID:                c8de12cc-d43b-4441-ac42-fe5a4b57ed64
	  Boot ID:                    b565898f-2a77-4d3d-89a2-2abb6adbadf9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jmbg2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-565925-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-9tcng                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-565925-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-565925-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-d44ft                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-565925-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-565925-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 46s                kube-proxy       
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-565925-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-565925-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-565925-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-565925-m03 event: Registered Node ha-565925-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-565925-m03 event: Registered Node ha-565925-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-565925-m03 event: Registered Node ha-565925-m03 in Controller
	  Normal   RegisteredNode           90s                node-controller  Node ha-565925-m03 event: Registered Node ha-565925-m03 in Controller
	  Normal   RegisteredNode           88s                node-controller  Node ha-565925-m03 event: Registered Node ha-565925-m03 in Controller
	  Normal   Starting                 63s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  63s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  63s                kubelet          Node ha-565925-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s                kubelet          Node ha-565925-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s                kubelet          Node ha-565925-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 63s                kubelet          Node ha-565925-m03 has been rebooted, boot id: b565898f-2a77-4d3d-89a2-2abb6adbadf9
	  Normal   RegisteredNode           39s                node-controller  Node ha-565925-m03 event: Registered Node ha-565925-m03 in Controller
	
	
	Name:               ha-565925-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565925-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-565925
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T10_41_59_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 10:41:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565925-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 10:51:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 10:51:52 +0000   Mon, 10 Jun 2024 10:51:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 10:51:52 +0000   Mon, 10 Jun 2024 10:51:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 10:51:52 +0000   Mon, 10 Jun 2024 10:51:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 10:51:52 +0000   Mon, 10 Jun 2024 10:51:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.229
	  Hostname:    ha-565925-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5196e1f9b5684ae78368fe8d66c3d24c
	  System UUID:                5196e1f9-b568-4ae7-8368-fe8d66c3d24c
	  Boot ID:                    fa33354e-1710-42c3-b31e-616fe87f501e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-lkf5b       100m (5%)    100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-dpsbw    0 (0%)       0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 9m56s              kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-565925-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-565925-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-565925-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   RegisteredNode           9m57s              node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   NodeReady                9m51s              kubelet          Node ha-565925-m04 status is now: NodeReady
	  Normal   RegisteredNode           90s                node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   RegisteredNode           88s                node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   NodeNotReady             50s                node-controller  Node ha-565925-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           39s                node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x3 over 8s)    kubelet          Node ha-565925-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x3 over 8s)    kubelet          Node ha-565925-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x3 over 8s)    kubelet          Node ha-565925-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s (x2 over 8s)    kubelet          Node ha-565925-m04 has been rebooted, boot id: fa33354e-1710-42c3-b31e-616fe87f501e
	  Normal   NodeReady                8s (x2 over 8s)    kubelet          Node ha-565925-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.150837] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.061096] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061390] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.176128] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.114890] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.264219] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +3.909095] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +3.637727] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +0.061637] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.135890] systemd-fstab-generator[1360]: Ignoring "noauto" option for root device
	[  +0.082129] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.392312] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.014769] kauditd_printk_skb: 43 callbacks suppressed
	[  +9.917879] kauditd_printk_skb: 21 callbacks suppressed
	[Jun10 10:49] systemd-fstab-generator[3825]: Ignoring "noauto" option for root device
	[  +0.169090] systemd-fstab-generator[3837]: Ignoring "noauto" option for root device
	[  +0.188008] systemd-fstab-generator[3851]: Ignoring "noauto" option for root device
	[  +0.156438] systemd-fstab-generator[3863]: Ignoring "noauto" option for root device
	[  +0.268788] systemd-fstab-generator[3891]: Ignoring "noauto" option for root device
	[  +0.739516] systemd-fstab-generator[3989]: Ignoring "noauto" option for root device
	[ +12.921754] kauditd_printk_skb: 218 callbacks suppressed
	[ +10.073147] kauditd_printk_skb: 1 callbacks suppressed
	[Jun10 10:50] kauditd_printk_skb: 1 callbacks suppressed
	[ +10.065204] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [15b93b06d8221ab57065f3fdffaa93240b54fcaea6359ee510147933ebc24cdd] <==
	2024/06/10 10:47:59 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-10T10:47:59.717672Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.157438722s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-06-10T10:47:59.727283Z","caller":"traceutil/trace.go:171","msg":"trace[2016543740] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; }","duration":"7.167054579s","start":"2024-06-10T10:47:52.560223Z","end":"2024-06-10T10:47:59.727278Z","steps":["trace[2016543740] 'agreement among raft nodes before linearized reading'  (duration: 7.157438296s)"],"step_count":1}
	2024/06/10 10:47:59 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-10T10:47:59.859806Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":16210302245861675405,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-06-10T10:47:59.98587Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.208:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-10T10:47:59.985953Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.208:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-10T10:47:59.986031Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"7fe6bf77aaafe0f6","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-06-10T10:47:59.986226Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"55cc759d8ab60945"}
	{"level":"info","ts":"2024-06-10T10:47:59.986285Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"55cc759d8ab60945"}
	{"level":"info","ts":"2024-06-10T10:47:59.986337Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"55cc759d8ab60945"}
	{"level":"info","ts":"2024-06-10T10:47:59.986484Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"55cc759d8ab60945"}
	{"level":"info","ts":"2024-06-10T10:47:59.986541Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"55cc759d8ab60945"}
	{"level":"info","ts":"2024-06-10T10:47:59.986597Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"55cc759d8ab60945"}
	{"level":"info","ts":"2024-06-10T10:47:59.986628Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"55cc759d8ab60945"}
	{"level":"info","ts":"2024-06-10T10:47:59.986651Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:47:59.986681Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:47:59.986719Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:47:59.986922Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:47:59.986994Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:47:59.987053Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:47:59.987084Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:47:59.99121Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.208:2380"}
	{"level":"info","ts":"2024-06-10T10:47:59.991336Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.208:2380"}
	{"level":"info","ts":"2024-06-10T10:47:59.99139Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-565925","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.208:2380"],"advertise-client-urls":["https://192.168.39.208:2379"]}
	
	
	==> etcd [a51d5bffe5db4200ac6336c7ca76cc182a95d91ff55c5a12955c341dd76f23c5] <==
	{"level":"warn","ts":"2024-06-10T10:51:03.90469Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"55cc759d8ab60945","error":"Get \"https://192.168.39.76:2380/version\": dial tcp 192.168.39.76:2380: connect: connection refused"}
	{"level":"info","ts":"2024-06-10T10:51:04.025231Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"55cc759d8ab60945"}
	{"level":"info","ts":"2024-06-10T10:51:04.025364Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"55cc759d8ab60945"}
	{"level":"info","ts":"2024-06-10T10:51:04.031134Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"55cc759d8ab60945"}
	{"level":"info","ts":"2024-06-10T10:51:04.045193Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"7fe6bf77aaafe0f6","to":"55cc759d8ab60945","stream-type":"stream Message"}
	{"level":"info","ts":"2024-06-10T10:51:04.045241Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"55cc759d8ab60945"}
	{"level":"info","ts":"2024-06-10T10:51:04.050826Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"7fe6bf77aaafe0f6","to":"55cc759d8ab60945","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-06-10T10:51:04.050918Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"55cc759d8ab60945"}
	{"level":"warn","ts":"2024-06-10T10:51:04.944157Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"55cc759d8ab60945","rtt":"0s","error":"dial tcp 192.168.39.76:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-10T10:51:04.94431Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"55cc759d8ab60945","rtt":"0s","error":"dial tcp 192.168.39.76:2380: connect: connection refused"}
	{"level":"info","ts":"2024-06-10T10:51:08.975929Z","caller":"traceutil/trace.go:171","msg":"trace[1682434291] transaction","detail":"{read_only:false; response_revision:2359; number_of_response:1; }","duration":"178.560074ms","start":"2024-06-10T10:51:08.797323Z","end":"2024-06-10T10:51:08.975883Z","steps":["trace[1682434291] 'process raft request'  (duration: 178.374547ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T10:51:19.551175Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"71310573b672730c","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"46.69705ms"}
	{"level":"warn","ts":"2024-06-10T10:51:19.551295Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"55cc759d8ab60945","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"46.82231ms"}
	{"level":"info","ts":"2024-06-10T10:51:19.554022Z","caller":"traceutil/trace.go:171","msg":"trace[115985561] linearizableReadLoop","detail":"{readStateIndex:2808; appliedIndex:2809; }","duration":"124.027637ms","start":"2024-06-10T10:51:19.429968Z","end":"2024-06-10T10:51:19.553996Z","steps":["trace[115985561] 'read index received'  (duration: 124.022577ms)","trace[115985561] 'applied index is now lower than readState.Index'  (duration: 3.642µs)"],"step_count":2}
	{"level":"warn","ts":"2024-06-10T10:51:19.554308Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.281511ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-10T10:51:19.554408Z","caller":"traceutil/trace.go:171","msg":"trace[201109135] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2423; }","duration":"124.467006ms","start":"2024-06-10T10:51:19.429929Z","end":"2024-06-10T10:51:19.554396Z","steps":["trace[201109135] 'agreement among raft nodes before linearized reading'  (duration: 124.274258ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T10:51:19.554838Z","caller":"traceutil/trace.go:171","msg":"trace[975357648] transaction","detail":"{read_only:false; response_revision:2424; number_of_response:1; }","duration":"119.50791ms","start":"2024-06-10T10:51:19.435317Z","end":"2024-06-10T10:51:19.554825Z","steps":["trace[975357648] 'process raft request'  (duration: 119.346733ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T10:51:55.58919Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"71310573b672730c","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"85.560155ms"}
	{"level":"warn","ts":"2024-06-10T10:51:55.589284Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"55cc759d8ab60945","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"85.658179ms"}
	{"level":"info","ts":"2024-06-10T10:51:55.589843Z","caller":"traceutil/trace.go:171","msg":"trace[1333195628] transaction","detail":"{read_only:false; response_revision:2567; number_of_response:1; }","duration":"212.561626ms","start":"2024-06-10T10:51:55.377245Z","end":"2024-06-10T10:51:55.589807Z","steps":["trace[1333195628] 'process raft request'  (duration: 212.365877ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T10:51:55.590491Z","caller":"traceutil/trace.go:171","msg":"trace[1575856812] linearizableReadLoop","detail":"{readStateIndex:2985; appliedIndex:2986; }","duration":"169.543064ms","start":"2024-06-10T10:51:55.420885Z","end":"2024-06-10T10:51:55.590428Z","steps":["trace[1575856812] 'read index received'  (duration: 169.53759ms)","trace[1575856812] 'applied index is now lower than readState.Index'  (duration: 4.2µs)"],"step_count":2}
	{"level":"warn","ts":"2024-06-10T10:51:55.59097Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.000978ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-565925-m03\" ","response":"range_response_count:1 size:5677"}
	{"level":"info","ts":"2024-06-10T10:51:55.591071Z","caller":"traceutil/trace.go:171","msg":"trace[1171253762] range","detail":"{range_begin:/registry/minions/ha-565925-m03; range_end:; response_count:1; response_revision:2567; }","duration":"170.206364ms","start":"2024-06-10T10:51:55.420846Z","end":"2024-06-10T10:51:55.591053Z","steps":["trace[1171253762] 'agreement among raft nodes before linearized reading'  (duration: 169.845182ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T10:51:55.599093Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.7488ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-10T10:51:55.599162Z","caller":"traceutil/trace.go:171","msg":"trace[1312235231] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2567; }","duration":"168.849387ms","start":"2024-06-10T10:51:55.430299Z","end":"2024-06-10T10:51:55.599149Z","steps":["trace[1312235231] 'agreement among raft nodes before linearized reading'  (duration: 164.303188ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:52:00 up 14 min,  0 users,  load average: 0.48, 0.46, 0.31
	Linux ha-565925 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [30454a419886c40b480f6310ea93590cfd5ce458d59101eb2f1d8b18ccc00fe3] <==
	I0610 10:51:26.618917       1 main.go:250] Node ha-565925-m04 has CIDR [10.244.3.0/24] 
	I0610 10:51:36.689075       1 main.go:223] Handling node with IPs: map[192.168.39.208:{}]
	I0610 10:51:36.689195       1 main.go:227] handling current node
	I0610 10:51:36.689221       1 main.go:223] Handling node with IPs: map[192.168.39.230:{}]
	I0610 10:51:36.689238       1 main.go:250] Node ha-565925-m02 has CIDR [10.244.1.0/24] 
	I0610 10:51:36.689455       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0610 10:51:36.689477       1 main.go:250] Node ha-565925-m03 has CIDR [10.244.2.0/24] 
	I0610 10:51:36.689552       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0610 10:51:36.689573       1 main.go:250] Node ha-565925-m04 has CIDR [10.244.3.0/24] 
	I0610 10:51:46.697984       1 main.go:223] Handling node with IPs: map[192.168.39.208:{}]
	I0610 10:51:46.698095       1 main.go:227] handling current node
	I0610 10:51:46.698122       1 main.go:223] Handling node with IPs: map[192.168.39.230:{}]
	I0610 10:51:46.698139       1 main.go:250] Node ha-565925-m02 has CIDR [10.244.1.0/24] 
	I0610 10:51:46.698274       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0610 10:51:46.698295       1 main.go:250] Node ha-565925-m03 has CIDR [10.244.2.0/24] 
	I0610 10:51:46.698362       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0610 10:51:46.698384       1 main.go:250] Node ha-565925-m04 has CIDR [10.244.3.0/24] 
	I0610 10:51:56.705776       1 main.go:223] Handling node with IPs: map[192.168.39.208:{}]
	I0610 10:51:56.705833       1 main.go:227] handling current node
	I0610 10:51:56.705847       1 main.go:223] Handling node with IPs: map[192.168.39.230:{}]
	I0610 10:51:56.705853       1 main.go:250] Node ha-565925-m02 has CIDR [10.244.1.0/24] 
	I0610 10:51:56.706015       1 main.go:223] Handling node with IPs: map[192.168.39.76:{}]
	I0610 10:51:56.706036       1 main.go:250] Node ha-565925-m03 has CIDR [10.244.2.0/24] 
	I0610 10:51:56.706112       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0610 10:51:56.706137       1 main.go:250] Node ha-565925-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [6d2fc31bedad8ed60279933edc3fdf9c744a1606b0249fb4358d66b5c7884b47] <==
	I0610 10:49:34.580026       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0610 10:49:44.846194       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0610 10:49:54.853492       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0610 10:49:55.854398       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0610 10:49:57.855204       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0610 10:50:00.857148       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xe3b
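	Editor's note: the kindnet panic above is a bounded retry loop around the node list: each failure to reach https://10.96.0.1:443 is retried a fixed number of times, after which the process exits so the pod can be restarted once the apiserver is reachable again. The sketch below is my illustration of that retry-then-fail pattern using client-go, not the kindnetd source; it assumes in-cluster credentials.
	
	// retry_nodes.go - bounded retry of a node list, mirroring the behaviour visible in the log.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig() // assumes the program runs inside the cluster
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		const maxRetries = 5
		for i := 0; i < maxRetries; i++ {
			nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
			if err != nil {
				fmt.Printf("Failed to get nodes, retrying after error: %v\n", err)
				time.Sleep(time.Duration(i+1) * time.Second)
				continue
			}
			fmt.Printf("got %d nodes\n", len(nodes.Items))
			return
		}
		panic("Reached maximum retries obtaining node list")
	}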
	
	
	==> kube-apiserver [10ce07d12f096d630f9093eb4eeb3bcfb435174cad5058aad05bd4c955206bef] <==
	I0610 10:49:34.359535       1 options.go:221] external host was not specified, using 192.168.39.208
	I0610 10:49:34.361960       1 server.go:148] Version: v1.30.1
	I0610 10:49:34.365190       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 10:49:35.246965       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0610 10:49:35.263293       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0610 10:49:35.263336       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0610 10:49:35.263547       1 instance.go:299] Using reconciler: lease
	I0610 10:49:35.264009       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0610 10:49:55.243480       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0610 10:49:55.244605       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0610 10:49:55.265519       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [895531b30d08486c2c45c81d3c4061852a40480faff500bc98d063e08c3908f2] <==
	I0610 10:50:17.816124       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 10:50:17.816246       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 10:50:17.898928       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0610 10:50:17.899005       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 10:50:17.899086       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0610 10:50:17.901846       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0610 10:50:17.902077       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0610 10:50:17.902140       1 shared_informer.go:320] Caches are synced for configmaps
	I0610 10:50:17.902195       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 10:50:17.904366       1 aggregator.go:165] initial CRD sync complete...
	I0610 10:50:17.904414       1 autoregister_controller.go:141] Starting autoregister controller
	I0610 10:50:17.904421       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0610 10:50:17.904426       1 cache.go:39] Caches are synced for autoregister controller
	I0610 10:50:17.911629       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0610 10:50:17.914467       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0610 10:50:17.924454       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0610 10:50:17.924494       1 policy_source.go:224] refreshing policies
	I0610 10:50:17.994953       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0610 10:50:18.097606       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.76]
	I0610 10:50:18.101311       1 controller.go:615] quota admission added evaluator for: endpoints
	I0610 10:50:18.140209       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0610 10:50:18.163711       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0610 10:50:18.813070       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0610 10:50:19.224408       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.208 192.168.39.76]
	W0610 10:50:39.226166       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.208 192.168.39.230]
	
	
	==> kube-controller-manager [a35ae66a1bbe396e6ff9d769def35e984902ed42b5989274e34cad8f90ba2627] <==
	I0610 10:49:35.753024       1 serving.go:380] Generated self-signed cert in-memory
	I0610 10:49:36.068608       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0610 10:49:36.068712       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 10:49:36.070825       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0610 10:49:36.071542       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 10:49:36.071675       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 10:49:36.071819       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0610 10:49:56.272999       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.208:8443/healthz\": dial tcp 192.168.39.208:8443: connect: connection refused"
	
	
	==> kube-controller-manager [ba05d1801bbb55716b014287ef6d2a8e0065c2e60eb0da2be941e285cce4111d] <==
	I0610 10:50:30.209095       1 shared_informer.go:320] Caches are synced for stateful set
	I0610 10:50:30.214402       1 shared_informer.go:320] Caches are synced for expand
	I0610 10:50:30.216840       1 shared_informer.go:320] Caches are synced for ephemeral
	I0610 10:50:30.233600       1 shared_informer.go:320] Caches are synced for persistent volume
	I0610 10:50:30.238592       1 shared_informer.go:320] Caches are synced for attach detach
	I0610 10:50:30.262225       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 10:50:30.269842       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 10:50:30.272431       1 shared_informer.go:320] Caches are synced for PVC protection
	I0610 10:50:30.298347       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-565925"
	I0610 10:50:30.298464       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-565925-m02"
	I0610 10:50:30.298927       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-565925-m03"
	I0610 10:50:30.299008       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-565925-m04"
	I0610 10:50:30.303316       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0610 10:50:30.694356       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 10:50:30.740016       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 10:50:30.740091       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0610 10:50:58.233868       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.835679ms"
	I0610 10:50:58.234995       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="116.549µs"
	I0610 10:51:16.689316       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.579813ms"
	I0610 10:51:16.690464       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.712µs"
	I0610 10:51:19.982556       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="46.664035ms"
	I0610 10:51:19.982713       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="67.886µs"
	I0610 10:51:29.943376       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.019845ms"
	I0610 10:51:29.943893       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="224.207µs"
	I0610 10:51:52.327326       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-565925-m04"
	
	
	==> kube-proxy [d6b392205cc4da349937ddbd66cd5e4e32466011eb011c9e13a0214e5aeab566] <==
	E0610 10:49:57.977325       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-565925\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0610 10:50:16.410641       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-565925\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0610 10:50:16.410982       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0610 10:50:16.480570       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 10:50:16.480704       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 10:50:16.480733       1 server_linux.go:165] "Using iptables Proxier"
	I0610 10:50:16.483458       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 10:50:16.483693       1 server.go:872] "Version info" version="v1.30.1"
	I0610 10:50:16.483731       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 10:50:16.485415       1 config.go:192] "Starting service config controller"
	I0610 10:50:16.485458       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 10:50:16.485503       1 config.go:101] "Starting endpoint slice config controller"
	I0610 10:50:16.485519       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 10:50:16.486337       1 config.go:319] "Starting node config controller"
	I0610 10:50:16.486367       1 shared_informer.go:313] Waiting for caches to sync for node config
	W0610 10:50:19.481660       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:50:19.481945       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:50:19.483161       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0610 10:50:19.483323       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:50:19.483424       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:50:19.483590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:50:19.483667       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0610 10:50:20.586480       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 10:50:20.885886       1 shared_informer.go:320] Caches are synced for service config
	I0610 10:50:20.886651       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [fa492285e9f663d2b76c575594b5ba550e97e7861c891c05022d7e2ac1a78f91] <==
	E0610 10:46:44.441668       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:46:47.513142       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:46:47.513201       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:46:47.513147       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:46:47.513269       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:46:47.513416       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:46:47.513293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:46:53.659409       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:46:53.659583       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:46:53.659686       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:46:53.659629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:46:53.659443       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:46:53.659911       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:47:02.874609       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:47:02.874673       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:47:02.874818       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:47:02.874864       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:47:09.018265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:47:09.018331       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:47:18.233358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:47:18.233460       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:47:30.522595       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:47:30.522912       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:47:33.593898       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:47:33.594113       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [538119110afb1122b2b9d43d0a15441ed76f351b1221ca19caa981d3aab0eb82] <==
	W0610 10:47:56.323834       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0610 10:47:56.323909       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0610 10:47:56.655790       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0610 10:47:56.655830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0610 10:47:56.805511       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 10:47:56.805649       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0610 10:47:56.972826       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0610 10:47:56.972956       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0610 10:47:56.975078       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 10:47:56.975114       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 10:47:57.017651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 10:47:57.017727       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0610 10:47:57.058312       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 10:47:57.058400       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0610 10:47:57.334507       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 10:47:57.334591       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0610 10:47:57.721852       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 10:47:57.721992       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 10:47:57.743513       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0610 10:47:57.743643       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0610 10:47:57.756633       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 10:47:57.756855       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 10:47:59.648277       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 10:47:59.648317       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 10:47:59.696248       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d73c4fbf165479cdecba73b84a6325cb243bbb4cd1fed39f1c9a2c00168252e1] <==
	W0610 10:50:12.573916       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.208:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:12.573992       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.208:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:13.018263       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:13.018403       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:13.171329       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:13.171445       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:13.349389       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:13.349453       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:14.073188       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.208:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:14.073242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.208:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:14.293199       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:14.293274       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:14.389307       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.208:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:14.389425       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.208:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:14.514209       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.208:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:14.514616       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.208:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:15.509656       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.208:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:15.509725       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.208:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:17.832639       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 10:50:17.832863       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 10:50:17.833061       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 10:50:17.833139       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 10:50:17.833237       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 10:50:17.833265       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 10:50:30.277918       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 10 10:50:25 ha-565925 kubelet[1367]: I0610 10:50:25.811393    1367 scope.go:117] "RemoveContainer" containerID="18be5875f033dc26e05de432e9aafd5da62427c82b8a7148b7a2315e67a331fa"
	Jun 10 10:50:25 ha-565925 kubelet[1367]: E0610 10:50:25.811805    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(0ca60a36-c445-4520-b857-7df39dfed848)\"" pod="kube-system/storage-provisioner" podUID="0ca60a36-c445-4520-b857-7df39dfed848"
	Jun 10 10:50:29 ha-565925 kubelet[1367]: I0610 10:50:29.811328    1367 scope.go:117] "RemoveContainer" containerID="6d2fc31bedad8ed60279933edc3fdf9c744a1606b0249fb4358d66b5c7884b47"
	Jun 10 10:50:29 ha-565925 kubelet[1367]: E0610 10:50:29.812115    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-rnn59_kube-system(9141e131-eebc-4f51-8b55-46ff649ffaee)\"" pod="kube-system/kindnet-rnn59" podUID="9141e131-eebc-4f51-8b55-46ff649ffaee"
	Jun 10 10:50:30 ha-565925 kubelet[1367]: E0610 10:50:30.831988    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 10:50:30 ha-565925 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 10:50:30 ha-565925 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 10:50:30 ha-565925 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 10:50:30 ha-565925 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 10:50:30 ha-565925 kubelet[1367]: I0610 10:50:30.872677    1367 scope.go:117] "RemoveContainer" containerID="bc4df07252fb45872d41728c3386619b228ccc7df4253b6852eb5655c1661866"
	Jun 10 10:50:37 ha-565925 kubelet[1367]: I0610 10:50:37.811394    1367 scope.go:117] "RemoveContainer" containerID="18be5875f033dc26e05de432e9aafd5da62427c82b8a7148b7a2315e67a331fa"
	Jun 10 10:50:37 ha-565925 kubelet[1367]: E0610 10:50:37.812110    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(0ca60a36-c445-4520-b857-7df39dfed848)\"" pod="kube-system/storage-provisioner" podUID="0ca60a36-c445-4520-b857-7df39dfed848"
	Jun 10 10:50:41 ha-565925 kubelet[1367]: I0610 10:50:41.810691    1367 scope.go:117] "RemoveContainer" containerID="6d2fc31bedad8ed60279933edc3fdf9c744a1606b0249fb4358d66b5c7884b47"
	Jun 10 10:50:41 ha-565925 kubelet[1367]: E0610 10:50:41.811441    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-rnn59_kube-system(9141e131-eebc-4f51-8b55-46ff649ffaee)\"" pod="kube-system/kindnet-rnn59" podUID="9141e131-eebc-4f51-8b55-46ff649ffaee"
	Jun 10 10:50:51 ha-565925 kubelet[1367]: I0610 10:50:51.811189    1367 scope.go:117] "RemoveContainer" containerID="18be5875f033dc26e05de432e9aafd5da62427c82b8a7148b7a2315e67a331fa"
	Jun 10 10:50:54 ha-565925 kubelet[1367]: I0610 10:50:54.274112    1367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-6wmkd" podStartSLOduration=570.67640809 podStartE2EDuration="9m33.27408245s" podCreationTimestamp="2024-06-10 10:41:21 +0000 UTC" firstStartedPulling="2024-06-10 10:41:21.82200867 +0000 UTC m=+171.156915668" lastFinishedPulling="2024-06-10 10:41:24.419683031 +0000 UTC m=+173.754590028" observedRunningTime="2024-06-10 10:41:25.563101157 +0000 UTC m=+174.898008162" watchObservedRunningTime="2024-06-10 10:50:54.27408245 +0000 UTC m=+743.608989455"
	Jun 10 10:50:55 ha-565925 kubelet[1367]: I0610 10:50:55.810721    1367 scope.go:117] "RemoveContainer" containerID="6d2fc31bedad8ed60279933edc3fdf9c744a1606b0249fb4358d66b5c7884b47"
	Jun 10 10:51:17 ha-565925 kubelet[1367]: I0610 10:51:17.810140    1367 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-565925" podUID="039ffa3e-aac6-4bdc-a576-0158c7fb283d"
	Jun 10 10:51:17 ha-565925 kubelet[1367]: I0610 10:51:17.828628    1367 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-565925"
	Jun 10 10:51:19 ha-565925 kubelet[1367]: I0610 10:51:19.916173    1367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-565925" podStartSLOduration=2.916150291 podStartE2EDuration="2.916150291s" podCreationTimestamp="2024-06-10 10:51:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-10 10:51:19.915605871 +0000 UTC m=+769.250512871" watchObservedRunningTime="2024-06-10 10:51:19.916150291 +0000 UTC m=+769.251057296"
	Jun 10 10:51:30 ha-565925 kubelet[1367]: E0610 10:51:30.830366    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 10:51:30 ha-565925 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 10:51:30 ha-565925 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 10:51:30 ha-565925 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 10:51:30 ha-565925 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	E0610 10:51:59.707652   29424 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19046-3880/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-565925 -n ha-565925
helpers_test.go:261: (dbg) Run:  kubectl --context ha-565925 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (365.10s)

x
+
TestMultiControlPlane/serial/StopCluster (141.75s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 stop -v=7 --alsologtostderr
E0610 10:54:12.453525   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565925 stop -v=7 --alsologtostderr: exit status 82 (2m0.470295838s)

-- stdout --
	* Stopping node "ha-565925-m04"  ...
	
	

-- /stdout --
** stderr ** 
	I0610 10:52:19.365965   29837 out.go:291] Setting OutFile to fd 1 ...
	I0610 10:52:19.366204   29837 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:52:19.366212   29837 out.go:304] Setting ErrFile to fd 2...
	I0610 10:52:19.366217   29837 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:52:19.366389   29837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 10:52:19.366580   29837 out.go:298] Setting JSON to false
	I0610 10:52:19.366651   29837 mustload.go:65] Loading cluster: ha-565925
	I0610 10:52:19.366971   29837 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:52:19.367064   29837 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json ...
	I0610 10:52:19.367244   29837 mustload.go:65] Loading cluster: ha-565925
	I0610 10:52:19.367368   29837 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:52:19.367398   29837 stop.go:39] StopHost: ha-565925-m04
	I0610 10:52:19.367750   29837 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:52:19.367817   29837 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:52:19.382386   29837 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36901
	I0610 10:52:19.382852   29837 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:52:19.383404   29837 main.go:141] libmachine: Using API Version  1
	I0610 10:52:19.383427   29837 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:52:19.383743   29837 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:52:19.386075   29837 out.go:177] * Stopping node "ha-565925-m04"  ...
	I0610 10:52:19.387497   29837 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0610 10:52:19.387528   29837 main.go:141] libmachine: (ha-565925-m04) Calling .DriverName
	I0610 10:52:19.387718   29837 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0610 10:52:19.387752   29837 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHHostname
	I0610 10:52:19.390460   29837 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:52:19.390926   29837 main.go:141] libmachine: (ha-565925-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:20:94", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:51:46 +0000 UTC Type:0 Mac:52:54:00:c5:20:94 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-565925-m04 Clientid:01:52:54:00:c5:20:94}
	I0610 10:52:19.390964   29837 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined IP address 192.168.39.229 and MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:52:19.391150   29837 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHPort
	I0610 10:52:19.391324   29837 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHKeyPath
	I0610 10:52:19.391478   29837 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHUsername
	I0610 10:52:19.391619   29837 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m04/id_rsa Username:docker}
	I0610 10:52:19.479562   29837 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0610 10:52:19.531715   29837 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0610 10:52:19.583601   29837 main.go:141] libmachine: Stopping "ha-565925-m04"...
	I0610 10:52:19.583623   29837 main.go:141] libmachine: (ha-565925-m04) Calling .GetState
	I0610 10:52:19.585335   29837 main.go:141] libmachine: (ha-565925-m04) Calling .Stop
	I0610 10:52:19.588752   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 0/120
	I0610 10:52:20.590793   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 1/120
	I0610 10:52:21.592366   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 2/120
	I0610 10:52:22.594057   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 3/120
	I0610 10:52:23.595518   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 4/120
	I0610 10:52:24.597385   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 5/120
	I0610 10:52:25.599611   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 6/120
	I0610 10:52:26.600889   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 7/120
	I0610 10:52:27.602893   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 8/120
	I0610 10:52:28.605206   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 9/120
	I0610 10:52:29.607473   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 10/120
	I0610 10:52:30.608715   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 11/120
	I0610 10:52:31.610165   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 12/120
	I0610 10:52:32.611385   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 13/120
	I0610 10:52:33.612641   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 14/120
	I0610 10:52:34.614486   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 15/120
	I0610 10:52:35.615903   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 16/120
	I0610 10:52:36.617440   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 17/120
	I0610 10:52:37.618980   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 18/120
	I0610 10:52:38.620512   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 19/120
	I0610 10:52:39.622773   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 20/120
	I0610 10:52:40.624389   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 21/120
	I0610 10:52:41.625835   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 22/120
	I0610 10:52:42.627689   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 23/120
	I0610 10:52:43.629208   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 24/120
	I0610 10:52:44.631384   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 25/120
	I0610 10:52:45.632691   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 26/120
	I0610 10:52:46.634162   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 27/120
	I0610 10:52:47.635442   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 28/120
	I0610 10:52:48.636872   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 29/120
	I0610 10:52:49.638713   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 30/120
	I0610 10:52:50.640056   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 31/120
	I0610 10:52:51.641457   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 32/120
	I0610 10:52:52.642852   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 33/120
	I0610 10:52:53.644269   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 34/120
	I0610 10:52:54.646304   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 35/120
	I0610 10:52:55.647559   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 36/120
	I0610 10:52:56.648752   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 37/120
	I0610 10:52:57.650140   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 38/120
	I0610 10:52:58.651590   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 39/120
	I0610 10:52:59.653411   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 40/120
	I0610 10:53:00.654868   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 41/120
	I0610 10:53:01.656578   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 42/120
	I0610 10:53:02.657990   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 43/120
	I0610 10:53:03.659310   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 44/120
	I0610 10:53:04.661390   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 45/120
	I0610 10:53:05.663263   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 46/120
	I0610 10:53:06.664763   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 47/120
	I0610 10:53:07.666027   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 48/120
	I0610 10:53:08.667526   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 49/120
	I0610 10:53:09.669592   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 50/120
	I0610 10:53:10.671023   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 51/120
	I0610 10:53:11.672580   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 52/120
	I0610 10:53:12.674033   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 53/120
	I0610 10:53:13.675291   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 54/120
	I0610 10:53:14.677051   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 55/120
	I0610 10:53:15.678655   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 56/120
	I0610 10:53:16.680416   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 57/120
	I0610 10:53:17.682554   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 58/120
	I0610 10:53:18.684157   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 59/120
	I0610 10:53:19.686675   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 60/120
	I0610 10:53:20.688052   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 61/120
	I0610 10:53:21.689767   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 62/120
	I0610 10:53:22.691351   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 63/120
	I0610 10:53:23.692940   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 64/120
	I0610 10:53:24.694390   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 65/120
	I0610 10:53:25.695772   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 66/120
	I0610 10:53:26.697308   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 67/120
	I0610 10:53:27.699545   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 68/120
	I0610 10:53:28.701257   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 69/120
	I0610 10:53:29.703325   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 70/120
	I0610 10:53:30.704541   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 71/120
	I0610 10:53:31.705849   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 72/120
	I0610 10:53:32.707624   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 73/120
	I0610 10:53:33.708898   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 74/120
	I0610 10:53:34.711062   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 75/120
	I0610 10:53:35.712310   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 76/120
	I0610 10:53:36.714556   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 77/120
	I0610 10:53:37.716017   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 78/120
	I0610 10:53:38.717324   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 79/120
	I0610 10:53:39.719309   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 80/120
	I0610 10:53:40.721210   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 81/120
	I0610 10:53:41.723446   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 82/120
	I0610 10:53:42.724707   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 83/120
	I0610 10:53:43.725905   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 84/120
	I0610 10:53:44.727438   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 85/120
	I0610 10:53:45.728811   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 86/120
	I0610 10:53:46.730048   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 87/120
	I0610 10:53:47.732492   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 88/120
	I0610 10:53:48.734130   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 89/120
	I0610 10:53:49.736657   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 90/120
	I0610 10:53:50.737919   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 91/120
	I0610 10:53:51.739368   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 92/120
	I0610 10:53:52.740780   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 93/120
	I0610 10:53:53.742510   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 94/120
	I0610 10:53:54.744747   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 95/120
	I0610 10:53:55.746721   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 96/120
	I0610 10:53:56.748506   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 97/120
	I0610 10:53:57.749881   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 98/120
	I0610 10:53:58.751316   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 99/120
	I0610 10:53:59.753427   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 100/120
	I0610 10:54:00.755350   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 101/120
	I0610 10:54:01.756998   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 102/120
	I0610 10:54:02.758426   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 103/120
	I0610 10:54:03.759887   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 104/120
	I0610 10:54:04.761475   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 105/120
	I0610 10:54:05.763001   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 106/120
	I0610 10:54:06.764755   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 107/120
	I0610 10:54:07.766084   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 108/120
	I0610 10:54:08.767358   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 109/120
	I0610 10:54:09.769808   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 110/120
	I0610 10:54:10.771506   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 111/120
	I0610 10:54:11.773481   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 112/120
	I0610 10:54:12.775728   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 113/120
	I0610 10:54:13.777273   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 114/120
	I0610 10:54:14.779574   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 115/120
	I0610 10:54:15.780892   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 116/120
	I0610 10:54:16.782276   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 117/120
	I0610 10:54:17.783451   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 118/120
	I0610 10:54:18.784678   29837 main.go:141] libmachine: (ha-565925-m04) Waiting for machine to stop 119/120
	I0610 10:54:19.786090   29837 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0610 10:54:19.786159   29837 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0610 10:54:19.787914   29837 out.go:177] 
	W0610 10:54:19.789199   29837 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0610 10:54:19.789219   29837 out.go:239] * 
	* 
	W0610 10:54:19.792122   29837 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 10:54:19.793336   29837 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-565925 stop -v=7 --alsologtostderr": exit status 82
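The stderr above shows the shape of the failure: the driver polls the VM state once per second for 120 attempts ("Waiting for machine to stop N/120") and then gives up with GUEST_STOP_TIMEOUT because the machine never leaves the "Running" state, which the test surfaces as exit status 82. As a rough illustration only, a poll-with-timeout loop of that shape can be sketched as below; the names (vmState, waitForStop, getState) are hypothetical stand-ins and not minikube or libmachine APIs.

package main

import (
	"errors"
	"fmt"
	"time"
)

// vmState is an illustrative stand-in for whatever state string the VM
// driver reports ("Running", "Stopped", ...).
type vmState string

const stateRunning vmState = "Running"

// waitForStop polls getState up to maxAttempts times, one second apart,
// and returns an error if the machine is still Running at the end,
// matching the give-up-after-120-tries pattern seen in the log above.
func waitForStop(getState func() vmState, maxAttempts int) error {
	for i := 0; i < maxAttempts; i++ {
		if getState() != stateRunning {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(1 * time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// A machine that never stops, mirroring the failure mode in the log;
	// 5 attempts here instead of 120 so the sketch finishes quickly.
	err := waitForStop(func() vmState { return stateRunning }, 5)
	fmt.Println("stop err:", err)
}

The point of the sketch is only that exhausting the attempts is reported as an error rather than retried indefinitely, which is why the stop command exits non-zero while the VM stays up.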
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr: exit status 3 (18.991411691s)

                                                
                                                
-- stdout --
	ha-565925
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565925-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-565925-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:54:19.837638   30298 out.go:291] Setting OutFile to fd 1 ...
	I0610 10:54:19.837841   30298 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:54:19.837848   30298 out.go:304] Setting ErrFile to fd 2...
	I0610 10:54:19.837853   30298 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:54:19.838017   30298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 10:54:19.838161   30298 out.go:298] Setting JSON to false
	I0610 10:54:19.838183   30298 mustload.go:65] Loading cluster: ha-565925
	I0610 10:54:19.838241   30298 notify.go:220] Checking for updates...
	I0610 10:54:19.838541   30298 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:54:19.838555   30298 status.go:255] checking status of ha-565925 ...
	I0610 10:54:19.838920   30298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:54:19.838974   30298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:54:19.858033   30298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35619
	I0610 10:54:19.858468   30298 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:54:19.858972   30298 main.go:141] libmachine: Using API Version  1
	I0610 10:54:19.858998   30298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:54:19.859317   30298 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:54:19.859499   30298 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:54:19.861108   30298 status.go:330] ha-565925 host status = "Running" (err=<nil>)
	I0610 10:54:19.861129   30298 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:54:19.861404   30298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:54:19.861443   30298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:54:19.875828   30298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33729
	I0610 10:54:19.876270   30298 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:54:19.876669   30298 main.go:141] libmachine: Using API Version  1
	I0610 10:54:19.876689   30298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:54:19.876998   30298 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:54:19.877178   30298 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:54:19.880095   30298 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:19.880529   30298 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:54:19.880557   30298 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:19.880671   30298 host.go:66] Checking if "ha-565925" exists ...
	I0610 10:54:19.881017   30298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:54:19.881062   30298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:54:19.896063   30298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38907
	I0610 10:54:19.896501   30298 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:54:19.897193   30298 main.go:141] libmachine: Using API Version  1
	I0610 10:54:19.897214   30298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:54:19.897551   30298 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:54:19.897735   30298 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:54:19.897903   30298 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:54:19.897927   30298 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:54:19.900986   30298 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:19.901533   30298 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:54:19.901561   30298 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:19.901724   30298 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:54:19.901896   30298 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:19.902027   30298 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:54:19.902148   30298 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:54:19.990978   30298 ssh_runner.go:195] Run: systemctl --version
	I0610 10:54:20.000316   30298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:54:20.019733   30298 kubeconfig.go:125] found "ha-565925" server: "https://192.168.39.254:8443"
	I0610 10:54:20.019765   30298 api_server.go:166] Checking apiserver status ...
	I0610 10:54:20.019803   30298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:54:20.043002   30298 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5200/cgroup
	W0610 10:54:20.053234   30298 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5200/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 10:54:20.053285   30298 ssh_runner.go:195] Run: ls
	I0610 10:54:20.057684   30298 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0610 10:54:20.062055   30298 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0610 10:54:20.062076   30298 status.go:422] ha-565925 apiserver status = Running (err=<nil>)
	I0610 10:54:20.062086   30298 status.go:257] ha-565925 status: &{Name:ha-565925 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:54:20.062103   30298 status.go:255] checking status of ha-565925-m02 ...
	I0610 10:54:20.062455   30298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:54:20.062491   30298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:54:20.078127   30298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37739
	I0610 10:54:20.078560   30298 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:54:20.079082   30298 main.go:141] libmachine: Using API Version  1
	I0610 10:54:20.079102   30298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:54:20.079489   30298 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:54:20.079720   30298 main.go:141] libmachine: (ha-565925-m02) Calling .GetState
	I0610 10:54:20.081467   30298 status.go:330] ha-565925-m02 host status = "Running" (err=<nil>)
	I0610 10:54:20.081489   30298 host.go:66] Checking if "ha-565925-m02" exists ...
	I0610 10:54:20.081737   30298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:54:20.081770   30298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:54:20.102345   30298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35621
	I0610 10:54:20.102808   30298 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:54:20.103331   30298 main.go:141] libmachine: Using API Version  1
	I0610 10:54:20.103355   30298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:54:20.103697   30298 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:54:20.103875   30298 main.go:141] libmachine: (ha-565925-m02) Calling .GetIP
	I0610 10:54:20.106607   30298 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:54:20.107067   30298 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:49:45 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:54:20.107089   30298 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:54:20.107222   30298 host.go:66] Checking if "ha-565925-m02" exists ...
	I0610 10:54:20.107528   30298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:54:20.107565   30298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:54:20.121486   30298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44729
	I0610 10:54:20.121872   30298 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:54:20.122369   30298 main.go:141] libmachine: Using API Version  1
	I0610 10:54:20.122394   30298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:54:20.122723   30298 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:54:20.122934   30298 main.go:141] libmachine: (ha-565925-m02) Calling .DriverName
	I0610 10:54:20.123132   30298 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:54:20.123160   30298 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHHostname
	I0610 10:54:20.126006   30298 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:54:20.126370   30298 main.go:141] libmachine: (ha-565925-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:fd:0f", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:49:45 +0000 UTC Type:0 Mac:52:54:00:c0:fd:0f Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:ha-565925-m02 Clientid:01:52:54:00:c0:fd:0f}
	I0610 10:54:20.126400   30298 main.go:141] libmachine: (ha-565925-m02) DBG | domain ha-565925-m02 has defined IP address 192.168.39.230 and MAC address 52:54:00:c0:fd:0f in network mk-ha-565925
	I0610 10:54:20.126490   30298 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHPort
	I0610 10:54:20.126681   30298 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHKeyPath
	I0610 10:54:20.126854   30298 main.go:141] libmachine: (ha-565925-m02) Calling .GetSSHUsername
	I0610 10:54:20.127027   30298 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m02/id_rsa Username:docker}
	I0610 10:54:20.223313   30298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 10:54:20.249077   30298 kubeconfig.go:125] found "ha-565925" server: "https://192.168.39.254:8443"
	I0610 10:54:20.249106   30298 api_server.go:166] Checking apiserver status ...
	I0610 10:54:20.249145   30298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 10:54:20.266328   30298 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup
	W0610 10:54:20.276250   30298 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 10:54:20.276313   30298 ssh_runner.go:195] Run: ls
	I0610 10:54:20.280394   30298 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0610 10:54:20.284467   30298 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0610 10:54:20.284485   30298 status.go:422] ha-565925-m02 apiserver status = Running (err=<nil>)
	I0610 10:54:20.284494   30298 status.go:257] ha-565925-m02 status: &{Name:ha-565925-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 10:54:20.284507   30298 status.go:255] checking status of ha-565925-m04 ...
	I0610 10:54:20.284763   30298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:54:20.284797   30298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:54:20.299865   30298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34825
	I0610 10:54:20.300233   30298 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:54:20.300611   30298 main.go:141] libmachine: Using API Version  1
	I0610 10:54:20.300630   30298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:54:20.300979   30298 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:54:20.301152   30298 main.go:141] libmachine: (ha-565925-m04) Calling .GetState
	I0610 10:54:20.302567   30298 status.go:330] ha-565925-m04 host status = "Running" (err=<nil>)
	I0610 10:54:20.302582   30298 host.go:66] Checking if "ha-565925-m04" exists ...
	I0610 10:54:20.302832   30298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:54:20.302866   30298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:54:20.316900   30298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34251
	I0610 10:54:20.317358   30298 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:54:20.317819   30298 main.go:141] libmachine: Using API Version  1
	I0610 10:54:20.317844   30298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:54:20.318163   30298 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:54:20.318353   30298 main.go:141] libmachine: (ha-565925-m04) Calling .GetIP
	I0610 10:54:20.321015   30298 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:54:20.321499   30298 main.go:141] libmachine: (ha-565925-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:20:94", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:51:46 +0000 UTC Type:0 Mac:52:54:00:c5:20:94 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-565925-m04 Clientid:01:52:54:00:c5:20:94}
	I0610 10:54:20.321529   30298 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined IP address 192.168.39.229 and MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:54:20.321644   30298 host.go:66] Checking if "ha-565925-m04" exists ...
	I0610 10:54:20.322023   30298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:54:20.322061   30298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:54:20.336412   30298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40433
	I0610 10:54:20.336746   30298 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:54:20.337176   30298 main.go:141] libmachine: Using API Version  1
	I0610 10:54:20.337197   30298 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:54:20.337487   30298 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:54:20.337673   30298 main.go:141] libmachine: (ha-565925-m04) Calling .DriverName
	I0610 10:54:20.337829   30298 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 10:54:20.337847   30298 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHHostname
	I0610 10:54:20.340670   30298 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:54:20.341100   30298 main.go:141] libmachine: (ha-565925-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:20:94", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:51:46 +0000 UTC Type:0 Mac:52:54:00:c5:20:94 Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:ha-565925-m04 Clientid:01:52:54:00:c5:20:94}
	I0610 10:54:20.341125   30298 main.go:141] libmachine: (ha-565925-m04) DBG | domain ha-565925-m04 has defined IP address 192.168.39.229 and MAC address 52:54:00:c5:20:94 in network mk-ha-565925
	I0610 10:54:20.341281   30298 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHPort
	I0610 10:54:20.341462   30298 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHKeyPath
	I0610 10:54:20.341625   30298 main.go:141] libmachine: (ha-565925-m04) Calling .GetSSHUsername
	I0610 10:54:20.341747   30298 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m04/id_rsa Username:docker}
	W0610 10:54:38.785139   30298 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.229:22: connect: no route to host
	W0610 10:54:38.785218   30298 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.229:22: connect: no route to host
	E0610 10:54:38.785233   30298 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.229:22: connect: no route to host
	I0610 10:54:38.785240   30298 status.go:257] ha-565925-m04 status: &{Name:ha-565925-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0610 10:54:38.785257   30298 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.229:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr" : exit status 3
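The status failure is narrower than the stop failure: ha-565925 and ha-565925-m02 report healthy, but the SSH dial to the worker ha-565925-m04 at 192.168.39.229:22 fails with "no route to host", so its host is reported as Error and kubelet as Nonexistent. A minimal reachability probe of the kind that produces such a dial error can be sketched with the standard library only; the address below is simply reused from the log for illustration and the helper name is hypothetical.

package main

import (
	"fmt"
	"net"
	"time"
)

// sshReachable attempts a plain TCP dial to addr with a bounded timeout.
// An unreachable node fails fast with a dial error such as
// "connect: no route to host" instead of hanging.
func sshReachable(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return fmt.Errorf("dial %s: %w", addr, err)
	}
	conn.Close()
	return nil
}

func main() {
	// Address taken from the log above purely for illustration.
	if err := sshReachable("192.168.39.229:22", 5*time.Second); err != nil {
		fmt.Println("worker not reachable:", err)
		return
	}
	fmt.Println("worker reachable")
}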
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-565925 -n ha-565925
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-565925 logs -n 25: (1.648870169s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-565925 ssh -n ha-565925-m02 sudo cat                                          | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m03_ha-565925-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m03:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04:/home/docker/cp-test_ha-565925-m03_ha-565925-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925-m04 sudo cat                                          | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m03_ha-565925-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-565925 cp testdata/cp-test.txt                                                | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1107448961/001/cp-test_ha-565925-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925:/home/docker/cp-test_ha-565925-m04_ha-565925.txt                       |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925 sudo cat                                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m04_ha-565925.txt                                 |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m02:/home/docker/cp-test_ha-565925-m04_ha-565925-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925-m02 sudo cat                                          | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m04_ha-565925-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m03:/home/docker/cp-test_ha-565925-m04_ha-565925-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925-m03 sudo cat                                          | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m04_ha-565925-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-565925 node stop m02 -v=7                                                     | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-565925 node start m02 -v=7                                                    | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:44 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-565925 -v=7                                                           | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-565925 -v=7                                                                | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-565925 --wait=true -v=7                                                    | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:47 UTC | 10 Jun 24 10:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-565925                                                                | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:51 UTC |                     |
	| node    | ha-565925 node delete m03 -v=7                                                   | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:52 UTC | 10 Jun 24 10:52 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-565925 stop -v=7                                                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:52 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 10:47:58
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 10:47:58.708897   28147 out.go:291] Setting OutFile to fd 1 ...
	I0610 10:47:58.709187   28147 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:47:58.709198   28147 out.go:304] Setting ErrFile to fd 2...
	I0610 10:47:58.709205   28147 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:47:58.709390   28147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 10:47:58.709943   28147 out.go:298] Setting JSON to false
	I0610 10:47:58.710862   28147 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1820,"bootTime":1718014659,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 10:47:58.710921   28147 start.go:139] virtualization: kvm guest
	I0610 10:47:58.713146   28147 out.go:177] * [ha-565925] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 10:47:58.714611   28147 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 10:47:58.714644   28147 notify.go:220] Checking for updates...
	I0610 10:47:58.715823   28147 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:47:58.717146   28147 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 10:47:58.718541   28147 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:47:58.719976   28147 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 10:47:58.721456   28147 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:47:58.723255   28147 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:47:58.723402   28147 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 10:47:58.723851   28147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:47:58.723892   28147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:47:58.738873   28147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34949
	I0610 10:47:58.739425   28147 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:47:58.740014   28147 main.go:141] libmachine: Using API Version  1
	I0610 10:47:58.740033   28147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:47:58.740446   28147 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:47:58.740622   28147 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:47:58.779189   28147 out.go:177] * Using the kvm2 driver based on existing profile
	I0610 10:47:58.780705   28147 start.go:297] selected driver: kvm2
	I0610 10:47:58.780720   28147 start.go:901] validating driver "kvm2" against &{Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.229 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:47:58.780863   28147 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:47:58.781229   28147 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:47:58.781314   28147 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19046-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0610 10:47:58.797812   28147 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0610 10:47:58.798474   28147 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:47:58.798529   28147 cni.go:84] Creating CNI manager for ""
	I0610 10:47:58.798544   28147 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0610 10:47:58.798592   28147 start.go:340] cluster config:
	{Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.229 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:47:58.798739   28147 iso.go:125] acquiring lock: {Name:mk85871dbcb370cef71a4aa2ff7ef43503908c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:47:58.800716   28147 out.go:177] * Starting "ha-565925" primary control-plane node in "ha-565925" cluster
	I0610 10:47:58.801916   28147 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 10:47:58.801957   28147 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0610 10:47:58.801970   28147 cache.go:56] Caching tarball of preloaded images
	I0610 10:47:58.802042   28147 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 10:47:58.802058   28147 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0610 10:47:58.802217   28147 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json ...
	I0610 10:47:58.802503   28147 start.go:360] acquireMachinesLock for ha-565925: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:47:58.802557   28147 start.go:364] duration metric: took 29.094µs to acquireMachinesLock for "ha-565925"
	I0610 10:47:58.802575   28147 start.go:96] Skipping create...Using existing machine configuration
	I0610 10:47:58.802582   28147 fix.go:54] fixHost starting: 
	I0610 10:47:58.802985   28147 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:47:58.803018   28147 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:47:58.817675   28147 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44641
	I0610 10:47:58.818048   28147 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:47:58.818536   28147 main.go:141] libmachine: Using API Version  1
	I0610 10:47:58.818558   28147 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:47:58.818872   28147 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:47:58.819075   28147 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:47:58.819271   28147 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:47:58.820931   28147 fix.go:112] recreateIfNeeded on ha-565925: state=Running err=<nil>
	W0610 10:47:58.820976   28147 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 10:47:58.822954   28147 out.go:177] * Updating the running kvm2 "ha-565925" VM ...
	I0610 10:47:58.824299   28147 machine.go:94] provisionDockerMachine start ...
	I0610 10:47:58.824317   28147 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:47:58.824499   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:47:58.826736   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:58.827337   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:47:58.827367   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:58.827517   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:47:58.827684   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:47:58.827830   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:47:58.827947   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:47:58.828090   28147 main.go:141] libmachine: Using SSH client type: native
	I0610 10:47:58.828251   28147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:47:58.828262   28147 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 10:47:58.943615   28147 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565925
	
	I0610 10:47:58.943647   28147 main.go:141] libmachine: (ha-565925) Calling .GetMachineName
	I0610 10:47:58.943887   28147 buildroot.go:166] provisioning hostname "ha-565925"
	I0610 10:47:58.943923   28147 main.go:141] libmachine: (ha-565925) Calling .GetMachineName
	I0610 10:47:58.944150   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:47:58.947121   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:58.947504   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:47:58.947533   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:58.947778   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:47:58.947967   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:47:58.948149   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:47:58.948296   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:47:58.948437   28147 main.go:141] libmachine: Using SSH client type: native
	I0610 10:47:58.948594   28147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:47:58.948605   28147 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565925 && echo "ha-565925" | sudo tee /etc/hostname
	I0610 10:47:59.080167   28147 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565925
	
	I0610 10:47:59.080208   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:47:59.083486   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:59.083903   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:47:59.083927   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:59.084162   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:47:59.084535   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:47:59.084744   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:47:59.084891   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:47:59.085053   28147 main.go:141] libmachine: Using SSH client type: native
	I0610 10:47:59.085203   28147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:47:59.085218   28147 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565925' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565925/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565925' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 10:47:59.201319   28147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 10:47:59.201355   28147 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19046-3880/.minikube CaCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19046-3880/.minikube}
	I0610 10:47:59.201383   28147 buildroot.go:174] setting up certificates
	I0610 10:47:59.201395   28147 provision.go:84] configureAuth start
	I0610 10:47:59.201409   28147 main.go:141] libmachine: (ha-565925) Calling .GetMachineName
	I0610 10:47:59.201698   28147 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:47:59.204168   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:59.204526   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:47:59.204553   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:59.204725   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:47:59.207058   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:59.207444   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:47:59.207474   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:59.207581   28147 provision.go:143] copyHostCerts
	I0610 10:47:59.207609   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 10:47:59.207668   28147 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem, removing ...
	I0610 10:47:59.207680   28147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 10:47:59.207778   28147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem (1082 bytes)
	I0610 10:47:59.207880   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 10:47:59.207905   28147 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem, removing ...
	I0610 10:47:59.207913   28147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 10:47:59.207954   28147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem (1123 bytes)
	I0610 10:47:59.208023   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 10:47:59.208045   28147 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem, removing ...
	I0610 10:47:59.208051   28147 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 10:47:59.208087   28147 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem (1675 bytes)
	I0610 10:47:59.208150   28147 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem org=jenkins.ha-565925 san=[127.0.0.1 192.168.39.208 ha-565925 localhost minikube]
	I0610 10:47:59.405927   28147 provision.go:177] copyRemoteCerts
	I0610 10:47:59.405999   28147 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 10:47:59.406026   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:47:59.408573   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:59.408982   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:47:59.409022   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:59.409198   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:47:59.409378   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:47:59.409520   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:47:59.409666   28147 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:47:59.494928   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 10:47:59.495002   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 10:47:59.518217   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 10:47:59.518270   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0610 10:47:59.541448   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 10:47:59.541506   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 10:47:59.565344   28147 provision.go:87] duration metric: took 363.937855ms to configureAuth
	I0610 10:47:59.565375   28147 buildroot.go:189] setting minikube options for container-runtime
	I0610 10:47:59.565629   28147 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:47:59.565708   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:47:59.568281   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:59.568606   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:47:59.568629   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:47:59.568853   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:47:59.569080   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:47:59.569269   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:47:59.569423   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:47:59.569570   28147 main.go:141] libmachine: Using SSH client type: native
	I0610 10:47:59.569748   28147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:47:59.569764   28147 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 10:49:30.471958   28147 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 10:49:30.471982   28147 machine.go:97] duration metric: took 1m31.647670075s to provisionDockerMachine
	I0610 10:49:30.471995   28147 start.go:293] postStartSetup for "ha-565925" (driver="kvm2")
	I0610 10:49:30.472006   28147 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 10:49:30.472027   28147 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:49:30.472334   28147 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 10:49:30.472360   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:49:30.475326   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:30.475751   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:49:30.475781   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:30.475882   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:49:30.476085   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:49:30.476267   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:49:30.476408   28147 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:49:30.564496   28147 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 10:49:30.568673   28147 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 10:49:30.568692   28147 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/addons for local assets ...
	I0610 10:49:30.568759   28147 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/files for local assets ...
	I0610 10:49:30.568839   28147 filesync.go:149] local asset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> 107582.pem in /etc/ssl/certs
	I0610 10:49:30.568852   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /etc/ssl/certs/107582.pem
	I0610 10:49:30.568998   28147 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 10:49:30.578028   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /etc/ssl/certs/107582.pem (1708 bytes)
	I0610 10:49:30.603638   28147 start.go:296] duration metric: took 131.631226ms for postStartSetup
	I0610 10:49:30.603677   28147 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:49:30.603973   28147 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0610 10:49:30.604005   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:49:30.606663   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:30.607104   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:49:30.607142   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:30.607275   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:49:30.607487   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:49:30.607648   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:49:30.607777   28147 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	W0610 10:49:30.690454   28147 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0610 10:49:30.690477   28147 fix.go:56] duration metric: took 1m31.887897095s for fixHost
	I0610 10:49:30.690503   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:49:30.693351   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:30.693726   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:49:30.693748   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:30.693922   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:49:30.694113   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:49:30.694245   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:49:30.694394   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:49:30.694600   28147 main.go:141] libmachine: Using SSH client type: native
	I0610 10:49:30.694756   28147 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:49:30.694766   28147 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 10:49:30.805822   28147 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718016570.778522855
	
	I0610 10:49:30.805847   28147 fix.go:216] guest clock: 1718016570.778522855
	I0610 10:49:30.805855   28147 fix.go:229] Guest: 2024-06-10 10:49:30.778522855 +0000 UTC Remote: 2024-06-10 10:49:30.690484826 +0000 UTC m=+92.017151784 (delta=88.038029ms)
	I0610 10:49:30.805881   28147 fix.go:200] guest clock delta is within tolerance: 88.038029ms
	I0610 10:49:30.805887   28147 start.go:83] releasing machines lock for "ha-565925", held for 1m32.00331847s
	I0610 10:49:30.805918   28147 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:49:30.806303   28147 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:49:30.809325   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:30.809764   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:49:30.809791   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:30.810176   28147 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:49:30.810756   28147 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:49:30.810934   28147 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:49:30.811026   28147 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 10:49:30.811074   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:49:30.811122   28147 ssh_runner.go:195] Run: cat /version.json
	I0610 10:49:30.811146   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:49:30.813763   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:30.814018   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:30.814098   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:49:30.814123   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:30.814236   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:49:30.814387   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:49:30.814414   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:30.814531   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:49:30.814588   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:49:30.814728   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:49:30.814740   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:49:30.814942   28147 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:49:30.814952   28147 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:49:30.815107   28147 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:49:30.894456   28147 ssh_runner.go:195] Run: systemctl --version
	I0610 10:49:30.927052   28147 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 10:49:31.088085   28147 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 10:49:31.094166   28147 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 10:49:31.094235   28147 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 10:49:31.103504   28147 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0610 10:49:31.103530   28147 start.go:494] detecting cgroup driver to use...
	I0610 10:49:31.103594   28147 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 10:49:31.121117   28147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 10:49:31.134460   28147 docker.go:217] disabling cri-docker service (if available) ...
	I0610 10:49:31.134509   28147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 10:49:31.147585   28147 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 10:49:31.160577   28147 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 10:49:31.324758   28147 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 10:49:31.475301   28147 docker.go:233] disabling docker service ...
	I0610 10:49:31.475379   28147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 10:49:31.494201   28147 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 10:49:31.507130   28147 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 10:49:31.661611   28147 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 10:49:31.816894   28147 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 10:49:31.830851   28147 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 10:49:31.847906   28147 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0610 10:49:31.847974   28147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:49:31.857865   28147 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 10:49:31.857935   28147 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:49:31.868261   28147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:49:31.877889   28147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:49:31.887524   28147 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 10:49:31.897571   28147 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:49:31.907284   28147 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:49:31.917421   28147 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:49:31.927430   28147 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 10:49:31.936253   28147 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 10:49:31.945062   28147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:49:32.082304   28147 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0610 10:49:32.348379   28147 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 10:49:32.348449   28147 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 10:49:32.354030   28147 start.go:562] Will wait 60s for crictl version
	I0610 10:49:32.354083   28147 ssh_runner.go:195] Run: which crictl
	I0610 10:49:32.357517   28147 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 10:49:32.389450   28147 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0610 10:49:32.389521   28147 ssh_runner.go:195] Run: crio --version
	I0610 10:49:32.416981   28147 ssh_runner.go:195] Run: crio --version
	I0610 10:49:32.446480   28147 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0610 10:49:32.447840   28147 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:49:32.450727   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:32.451100   28147 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:49:32.451120   28147 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:49:32.451315   28147 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0610 10:49:32.455822   28147 kubeadm.go:877] updating cluster {Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.229 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 10:49:32.455952   28147 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 10:49:32.455990   28147 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 10:49:32.501795   28147 crio.go:514] all images are preloaded for cri-o runtime.
	I0610 10:49:32.501822   28147 crio.go:433] Images already preloaded, skipping extraction
	I0610 10:49:32.501869   28147 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 10:49:32.534694   28147 crio.go:514] all images are preloaded for cri-o runtime.
	I0610 10:49:32.534718   28147 cache_images.go:84] Images are preloaded, skipping loading
	I0610 10:49:32.534727   28147 kubeadm.go:928] updating node { 192.168.39.208 8443 v1.30.1 crio true true} ...
	I0610 10:49:32.534838   28147 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565925 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 10:49:32.534917   28147 ssh_runner.go:195] Run: crio config
	I0610 10:49:32.585114   28147 cni.go:84] Creating CNI manager for ""
	I0610 10:49:32.585133   28147 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0610 10:49:32.585142   28147 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 10:49:32.585159   28147 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.208 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-565925 NodeName:ha-565925 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 10:49:32.585287   28147 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-565925"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 10:49:32.585304   28147 kube-vip.go:115] generating kube-vip config ...
	I0610 10:49:32.585340   28147 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0610 10:49:32.596225   28147 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0610 10:49:32.596343   28147 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0610 10:49:32.596393   28147 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 10:49:32.605608   28147 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 10:49:32.605678   28147 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0610 10:49:32.614853   28147 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0610 10:49:32.630679   28147 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 10:49:32.645938   28147 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0610 10:49:32.661440   28147 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0610 10:49:32.678721   28147 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0610 10:49:32.682373   28147 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:49:32.819983   28147 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 10:49:32.834903   28147 certs.go:68] Setting up /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925 for IP: 192.168.39.208
	I0610 10:49:32.834933   28147 certs.go:194] generating shared ca certs ...
	I0610 10:49:32.834954   28147 certs.go:226] acquiring lock for ca certs: {Name:mke8b68fecbd8b649419d142cdde25446085a9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:49:32.835127   28147 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key
	I0610 10:49:32.835184   28147 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key
	I0610 10:49:32.835199   28147 certs.go:256] generating profile certs ...
	I0610 10:49:32.835311   28147 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.key
	I0610 10:49:32.835347   28147 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.61a86681
	I0610 10:49:32.835364   28147 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.61a86681 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.230 192.168.39.76 192.168.39.254]
	I0610 10:49:33.111005   28147 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.61a86681 ...
	I0610 10:49:33.111036   28147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.61a86681: {Name:mka6c1e364cfae37b6f112e6f3f1aa66ca53ce26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:49:33.111199   28147 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.61a86681 ...
	I0610 10:49:33.111210   28147 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.61a86681: {Name:mke87619ceb9a196226e8ca7401c9b9faf1c2460 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:49:33.111287   28147 certs.go:381] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.61a86681 -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt
	I0610 10:49:33.111436   28147 certs.go:385] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.61a86681 -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key
	I0610 10:49:33.111556   28147 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key
	I0610 10:49:33.111570   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 10:49:33.111587   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 10:49:33.111601   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 10:49:33.111614   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 10:49:33.111626   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 10:49:33.111636   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 10:49:33.111648   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 10:49:33.111661   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 10:49:33.111708   28147 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem (1338 bytes)
	W0610 10:49:33.111732   28147 certs.go:480] ignoring /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758_empty.pem, impossibly tiny 0 bytes
	I0610 10:49:33.111741   28147 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 10:49:33.111761   28147 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem (1082 bytes)
	I0610 10:49:33.111798   28147 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem (1123 bytes)
	I0610 10:49:33.111826   28147 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem (1675 bytes)
	I0610 10:49:33.111861   28147 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem (1708 bytes)
	I0610 10:49:33.111887   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /usr/share/ca-certificates/107582.pem
	I0610 10:49:33.111900   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:49:33.111911   28147 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem -> /usr/share/ca-certificates/10758.pem
	I0610 10:49:33.112511   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 10:49:33.137337   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 10:49:33.183120   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 10:49:33.319014   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 10:49:33.576256   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0610 10:49:33.837107   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 10:49:33.991745   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 10:49:34.074155   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 10:49:34.324284   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /usr/share/ca-certificates/107582.pem (1708 bytes)
	I0610 10:49:34.425934   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 10:49:34.478996   28147 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem --> /usr/share/ca-certificates/10758.pem (1338 bytes)
	I0610 10:49:34.570588   28147 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 10:49:34.603381   28147 ssh_runner.go:195] Run: openssl version
	I0610 10:49:34.611830   28147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 10:49:34.624737   28147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:49:34.629534   28147 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:49:34.629593   28147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:49:34.639439   28147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 10:49:34.656478   28147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10758.pem && ln -fs /usr/share/ca-certificates/10758.pem /etc/ssl/certs/10758.pem"
	I0610 10:49:34.669622   28147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10758.pem
	I0610 10:49:34.674096   28147 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:34 /usr/share/ca-certificates/10758.pem
	I0610 10:49:34.674137   28147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10758.pem
	I0610 10:49:34.682665   28147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10758.pem /etc/ssl/certs/51391683.0"
	I0610 10:49:34.694621   28147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107582.pem && ln -fs /usr/share/ca-certificates/107582.pem /etc/ssl/certs/107582.pem"
	I0610 10:49:34.707206   28147 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107582.pem
	I0610 10:49:34.711553   28147 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:34 /usr/share/ca-certificates/107582.pem
	I0610 10:49:34.711614   28147 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107582.pem
	I0610 10:49:34.717198   28147 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107582.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 10:49:34.728347   28147 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 10:49:34.732904   28147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0610 10:49:34.738974   28147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0610 10:49:34.744571   28147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0610 10:49:34.751126   28147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0610 10:49:34.756784   28147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0610 10:49:34.763044   28147 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0610 10:49:34.768470   28147 kubeadm.go:391] StartCluster: {Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.76 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.229 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:49:34.768596   28147 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0610 10:49:34.768658   28147 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 10:49:34.853868   28147 cri.go:89] found id: "6d2fc31bedad8ed60279933edc3fdf9c744a1606b0249fb4358d66b5c7884b47"
	I0610 10:49:34.853894   28147 cri.go:89] found id: "0a358cc1cc573aa1750cc09e41a48373a9ec054c4093e9b04258e36921b56cf5"
	I0610 10:49:34.853900   28147 cri.go:89] found id: "d6b392205cc4da349937ddbd66cd5e4e32466011eb011c9e13a0214e5aeab566"
	I0610 10:49:34.853905   28147 cri.go:89] found id: "ca1b692a8aa8fde2427299395cc8e93b726bd59d0d7029f6a172775a2ed06780"
	I0610 10:49:34.853909   28147 cri.go:89] found id: "d73c4fbf165479cdecba73b84a6325cb243bbb4cd1fed39f1c9a2c00168252e1"
	I0610 10:49:34.853914   28147 cri.go:89] found id: "a51d5bffe5db4200ac6336c7ca76cc182a95d91ff55c5a12955c341dd76f23c5"
	I0610 10:49:34.853918   28147 cri.go:89] found id: "10ce07d12f096d630f9093eb4eeb3bcfb435174cad5058aad05bd4c955206bef"
	I0610 10:49:34.853922   28147 cri.go:89] found id: "a35ae66a1bbe396e6ff9d769def35e984902ed42b5989274e34cad8f90ba2627"
	I0610 10:49:34.853926   28147 cri.go:89] found id: "6a79c08b543bef005daee1e3690fb18317e89ed3a172dcf8fb66dde1d4969fce"
	I0610 10:49:34.853932   28147 cri.go:89] found id: "a0419ef3f2987d9b8cc906b403eddc48694d814716bf8747432c935276cbaf0b"
	I0610 10:49:34.853936   28147 cri.go:89] found id: "b4e9d0b36913d4db0e9450807b1045c3be90511dfa172cd0b480a4042852bb2e"
	I0610 10:49:34.853940   28147 cri.go:89] found id: "bc4df07252fb45872d41728c3386619b228ccc7df4253b6852eb5655c1661866"
	I0610 10:49:34.853943   28147 cri.go:89] found id: "1f037e4537f6182747a78c8398e388d1cd43fe536754d6d8a50f52b8689b3163"
	I0610 10:49:34.853949   28147 cri.go:89] found id: "534a412f3a743952b0fba0175071fb9a47fd04169c4014721c7e5c6931d7e62f"
	I0610 10:49:34.853955   28147 cri.go:89] found id: "fa492285e9f663d2b76c575594b5ba550e97e7861c891c05022d7e2ac1a78f91"
	I0610 10:49:34.853963   28147 cri.go:89] found id: "538119110afb1122b2b9d43d0a15441ed76f351b1221ca19caa981d3aab0eb82"
	I0610 10:49:34.853967   28147 cri.go:89] found id: "15b93b06d8221ab57065f3fdffaa93240b54fcaea6359ee510147933ebc24cdd"
	I0610 10:49:34.853976   28147 cri.go:89] found id: "bcf7ff93de6e7c74b032d544065b02f69bea61c82b2d7cd580d6673506fd0496"
	I0610 10:49:34.853980   28147 cri.go:89] found id: ""
	I0610 10:49:34.854033   28147 ssh_runner.go:195] Run: sudo runc list -f json
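	(Editorial note, not part of the captured log.) The container IDs listed above were gathered by the two commands shown verbatim in the debug lines: a crictl listing filtered to the kube-system namespace label, followed by a low-level runc listing. A minimal reproduction sketch, assuming the ha-565925 profile is still running and that crictl and runc are on the node as they were during this run:
	    # open a shell on the primary control-plane node of the profile
	    minikube ssh -p ha-565925
	    # same crictl invocation as in the log: all kube-system containers, IDs only
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	    # cross-check against the low-level runtime's view
	    sudo runc list -f json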
	
	
	==> CRI-O <==
	Jun 10 10:54:39 ha-565925 crio[3904]: time="2024-06-10 10:54:39.388403280Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718016879388377913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82744d5d-43be-4435-b688-5cdafc6e9232 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:54:39 ha-565925 crio[3904]: time="2024-06-10 10:54:39.388980275Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c87270ee-6c53-4edf-b217-45edfbc9f750 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:54:39 ha-565925 crio[3904]: time="2024-06-10 10:54:39.389052903Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c87270ee-6c53-4edf-b217-45edfbc9f750 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:54:39 ha-565925 crio[3904]: time="2024-06-10 10:54:39.389529554Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:30454a419886c40b480f6310ea93590cfd5ce458d59101eb2f1d8b18ccc00fe3,PodSandboxId:1322b1eb5b92d55d2b0427c212e21c61f03f72a74f50d3d727c725295eaf3c44,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718016655830984097,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f42a3959512141305a423acbd9e3651a0d52b5082c682b258cd4164bf4c8e22,PodSandboxId:768cff5363857a285258a9ed1604f685fa33d5014b73b34d56f72f72557434f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718016651830324024,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:895531b30d08486c2c45c81d3c4061852a40480faff500bc98d063e08c3908f2,PodSandboxId:4e5f5234f2b8f6ae4a7073f73c5471b4bcd40a5c30d9f6f34994a1b033dffa5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718016615822358433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba05d1801bbb55716b014287ef6d2a8e0065c2e60eb0da2be941e285cce4111d,PodSandboxId:14111cba76dbad18e6e7a34e19ee1b5d192a8facff2d20aca16a16ad6fc22bf7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718016612826583803,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18be5875f033dc26e05de432e9aafd5da62427c82b8a7148b7a2315e67a331fa,PodSandboxId:768cff5363857a285258a9ed1604f685fa33d5014b73b34d56f72f72557434f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718016610822393036,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e293a1cc869311fd15c723f109226cd7cf9e58f9c0ce73b81e66e643ba0824,PodSandboxId:276099ec692d58a43f2137fdb8c495cf2b238659587a093f63455929cc0159f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718016607125159409,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031c3214a18181965175ad1ce4be9461912a8f144a9fd8499e18a516fbc4c24b,PodSandboxId:cfe7af207d454e48b4c9a313d5fffb0f03c0fb7b7fb6a479a1b43dc5e8d3fa0f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718016585794439061,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b7f7bf516814f2c5dbe0fbc6daa3a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:d6b392205cc4da349937ddbd66cd5e4e32466011eb011c9e13a0214e5aeab566,PodSandboxId:92b6f53b325e00531ba020a4091debef83c310509523dcadd98455c576589d1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718016573870529543,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2fc31b
edad8ed60279933edc3fdf9c744a1606b0249fb4358d66b5c7884b47,PodSandboxId:1322b1eb5b92d55d2b0427c212e21c61f03f72a74f50d3d727c725295eaf3c44,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718016574022181270,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a358cc1cc573aa1750cc09e41a48373a9ec054c409
3e9b04258e36921b56cf5,PodSandboxId:3afe7674416b272a7b1f2f0765e713a115b8a9fc430d4da60440baaec31d798c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718016573906704187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a51d5bffe5db4200ac6336c7ca76cc182a95d91ff55c5a12955c341dd76f23c5,PodSandboxId:38fe7da9f5e494f306636e4ee0f552c2e44d43db2ef1a04a5ea901f66d5db1e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718016573751897701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73c4fbf165479cdecba73b84a6325cb243bbb4cd1fed39f1c9a2c00168252e1,PodSandboxId:d3e905f6d61a711b33785d0332754575ce24a61714424b5bce0bd881d36495df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718016573784410334,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMes
sagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca1b692a8aa8fde2427299395cc8e93b726bd59d0d7029f6a172775a2ed06780,PodSandboxId:d74bbdd47986be76d0cd64bcc477460ea153199ba5f7b49f49a95d6c410dc7c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718016573866821840,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\
",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a35ae66a1bbe396e6ff9d769def35e984902ed42b5989274e34cad8f90ba2627,PodSandboxId:14111cba76dbad18e6e7a34e19ee1b5d192a8facff2d20aca16a16ad6fc22bf7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718016573678086400,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[
string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ce07d12f096d630f9093eb4eeb3bcfb435174cad5058aad05bd4c955206bef,PodSandboxId:4e5f5234f2b8f6ae4a7073f73c5471b4bcd40a5c30d9f6f34994a1b033dffa5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718016573705740508,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kuber
netes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2874c04d7e6035f0b4f93397eceefa3af883aa2a03dc83be4a8aced86a5e132,PodSandboxId:4f03a24f1c978aee692934393624f50f3f6023665dc034769ec878f8b821ad07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718016084446209177,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kuberne
tes.container.hash: 8230443c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f037e4537f6182747a78c8398e388d1cd43fe536754d6d8a50f52b8689b3163,PodSandboxId:937195f05576713819cba22da4e17238c7f675cd0d37572dfc6718570bb4938f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718015930175667446,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:534a412f3a743952b0fba0175071fb9a47fd04169c4014721c7e5c6931d7e62f,PodSandboxId:b454f12ed3fe06b7ae98d62eb1932133902e43f1db5bb572871f5eb7765942b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718015930144918315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa492285e9f663d2b76c575594b5ba550e97e7861c891c05022d7e2ac1a78f91,PodSandboxId:9c2610533ce9301fe46003696bb8fb9ed9f112b3cb0f1a144f0e614826879c22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718015925064910752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b93b06d8221ab57065f3fdffaa93240b54fcaea6359ee510147933ebc24cdd,PodSandboxId:ae496093662088de763239c043f30d1770c7ce342b51213f0abd2a6d78e5beb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1718015904609428104,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:538119110afb1122b2b9d43d0a15441ed76f351b1221ca19caa981d3aab0eb82,PodSandboxId:1c1c2a570436913958921b6806bdea488c57ba8e053d9bc44cde3c1407fe58c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1718015904613266630,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c87270ee-6c53-4edf-b217-45edfbc9f750 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:54:39 ha-565925 crio[3904]: time="2024-06-10 10:54:39.436522439Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=03edc6fc-30f6-4f5b-9a74-e77a7bb430d6 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:54:39 ha-565925 crio[3904]: time="2024-06-10 10:54:39.436618097Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=03edc6fc-30f6-4f5b-9a74-e77a7bb430d6 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:54:39 ha-565925 crio[3904]: time="2024-06-10 10:54:39.437788036Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2a533174-add5-420f-a5e9-84ff6de60522 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:54:39 ha-565925 crio[3904]: time="2024-06-10 10:54:39.438247329Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718016879438224277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2a533174-add5-420f-a5e9-84ff6de60522 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:54:39 ha-565925 crio[3904]: time="2024-06-10 10:54:39.438934127Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=affea040-7f51-4c6b-a8e2-117f3c9f8bca name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:54:39 ha-565925 crio[3904]: time="2024-06-10 10:54:39.438990475Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=affea040-7f51-4c6b-a8e2-117f3c9f8bca name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:54:39 ha-565925 crio[3904]: time="2024-06-10 10:54:39.439689855Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:30454a419886c40b480f6310ea93590cfd5ce458d59101eb2f1d8b18ccc00fe3,PodSandboxId:1322b1eb5b92d55d2b0427c212e21c61f03f72a74f50d3d727c725295eaf3c44,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718016655830984097,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f42a3959512141305a423acbd9e3651a0d52b5082c682b258cd4164bf4c8e22,PodSandboxId:768cff5363857a285258a9ed1604f685fa33d5014b73b34d56f72f72557434f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718016651830324024,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:895531b30d08486c2c45c81d3c4061852a40480faff500bc98d063e08c3908f2,PodSandboxId:4e5f5234f2b8f6ae4a7073f73c5471b4bcd40a5c30d9f6f34994a1b033dffa5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718016615822358433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba05d1801bbb55716b014287ef6d2a8e0065c2e60eb0da2be941e285cce4111d,PodSandboxId:14111cba76dbad18e6e7a34e19ee1b5d192a8facff2d20aca16a16ad6fc22bf7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718016612826583803,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18be5875f033dc26e05de432e9aafd5da62427c82b8a7148b7a2315e67a331fa,PodSandboxId:768cff5363857a285258a9ed1604f685fa33d5014b73b34d56f72f72557434f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718016610822393036,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e293a1cc869311fd15c723f109226cd7cf9e58f9c0ce73b81e66e643ba0824,PodSandboxId:276099ec692d58a43f2137fdb8c495cf2b238659587a093f63455929cc0159f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718016607125159409,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031c3214a18181965175ad1ce4be9461912a8f144a9fd8499e18a516fbc4c24b,PodSandboxId:cfe7af207d454e48b4c9a313d5fffb0f03c0fb7b7fb6a479a1b43dc5e8d3fa0f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718016585794439061,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b7f7bf516814f2c5dbe0fbc6daa3a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:d6b392205cc4da349937ddbd66cd5e4e32466011eb011c9e13a0214e5aeab566,PodSandboxId:92b6f53b325e00531ba020a4091debef83c310509523dcadd98455c576589d1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718016573870529543,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2fc31b
edad8ed60279933edc3fdf9c744a1606b0249fb4358d66b5c7884b47,PodSandboxId:1322b1eb5b92d55d2b0427c212e21c61f03f72a74f50d3d727c725295eaf3c44,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718016574022181270,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a358cc1cc573aa1750cc09e41a48373a9ec054c409
3e9b04258e36921b56cf5,PodSandboxId:3afe7674416b272a7b1f2f0765e713a115b8a9fc430d4da60440baaec31d798c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718016573906704187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a51d5bffe5db4200ac6336c7ca76cc182a95d91ff55c5a12955c341dd76f23c5,PodSandboxId:38fe7da9f5e494f306636e4ee0f552c2e44d43db2ef1a04a5ea901f66d5db1e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718016573751897701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73c4fbf165479cdecba73b84a6325cb243bbb4cd1fed39f1c9a2c00168252e1,PodSandboxId:d3e905f6d61a711b33785d0332754575ce24a61714424b5bce0bd881d36495df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718016573784410334,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMes
sagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca1b692a8aa8fde2427299395cc8e93b726bd59d0d7029f6a172775a2ed06780,PodSandboxId:d74bbdd47986be76d0cd64bcc477460ea153199ba5f7b49f49a95d6c410dc7c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718016573866821840,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\
",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a35ae66a1bbe396e6ff9d769def35e984902ed42b5989274e34cad8f90ba2627,PodSandboxId:14111cba76dbad18e6e7a34e19ee1b5d192a8facff2d20aca16a16ad6fc22bf7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718016573678086400,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[
string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ce07d12f096d630f9093eb4eeb3bcfb435174cad5058aad05bd4c955206bef,PodSandboxId:4e5f5234f2b8f6ae4a7073f73c5471b4bcd40a5c30d9f6f34994a1b033dffa5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718016573705740508,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kuber
netes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2874c04d7e6035f0b4f93397eceefa3af883aa2a03dc83be4a8aced86a5e132,PodSandboxId:4f03a24f1c978aee692934393624f50f3f6023665dc034769ec878f8b821ad07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718016084446209177,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kuberne
tes.container.hash: 8230443c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f037e4537f6182747a78c8398e388d1cd43fe536754d6d8a50f52b8689b3163,PodSandboxId:937195f05576713819cba22da4e17238c7f675cd0d37572dfc6718570bb4938f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718015930175667446,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:534a412f3a743952b0fba0175071fb9a47fd04169c4014721c7e5c6931d7e62f,PodSandboxId:b454f12ed3fe06b7ae98d62eb1932133902e43f1db5bb572871f5eb7765942b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718015930144918315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa492285e9f663d2b76c575594b5ba550e97e7861c891c05022d7e2ac1a78f91,PodSandboxId:9c2610533ce9301fe46003696bb8fb9ed9f112b3cb0f1a144f0e614826879c22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718015925064910752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b93b06d8221ab57065f3fdffaa93240b54fcaea6359ee510147933ebc24cdd,PodSandboxId:ae496093662088de763239c043f30d1770c7ce342b51213f0abd2a6d78e5beb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1718015904609428104,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:538119110afb1122b2b9d43d0a15441ed76f351b1221ca19caa981d3aab0eb82,PodSandboxId:1c1c2a570436913958921b6806bdea488c57ba8e053d9bc44cde3c1407fe58c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1718015904613266630,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=affea040-7f51-4c6b-a8e2-117f3c9f8bca name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:54:39 ha-565925 crio[3904]: time="2024-06-10 10:54:39.488914628Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5bf24ec1-8c80-4281-aa7f-9c3104b6a8c4 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:54:39 ha-565925 crio[3904]: time="2024-06-10 10:54:39.489016417Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5bf24ec1-8c80-4281-aa7f-9c3104b6a8c4 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:54:39 ha-565925 crio[3904]: time="2024-06-10 10:54:39.490026947Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2cee0740-f018-4e2f-abe0-568a8f0c8570 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:54:39 ha-565925 crio[3904]: time="2024-06-10 10:54:39.490658304Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718016879490634090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2cee0740-f018-4e2f-abe0-568a8f0c8570 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:54:39 ha-565925 crio[3904]: time="2024-06-10 10:54:39.491315629Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4df939cc-e365-4ffb-9949-394e560d4191 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:54:39 ha-565925 crio[3904]: time="2024-06-10 10:54:39.491388372Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4df939cc-e365-4ffb-9949-394e560d4191 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:54:39 ha-565925 crio[3904]: time="2024-06-10 10:54:39.491985640Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:30454a419886c40b480f6310ea93590cfd5ce458d59101eb2f1d8b18ccc00fe3,PodSandboxId:1322b1eb5b92d55d2b0427c212e21c61f03f72a74f50d3d727c725295eaf3c44,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718016655830984097,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f42a3959512141305a423acbd9e3651a0d52b5082c682b258cd4164bf4c8e22,PodSandboxId:768cff5363857a285258a9ed1604f685fa33d5014b73b34d56f72f72557434f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718016651830324024,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:895531b30d08486c2c45c81d3c4061852a40480faff500bc98d063e08c3908f2,PodSandboxId:4e5f5234f2b8f6ae4a7073f73c5471b4bcd40a5c30d9f6f34994a1b033dffa5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718016615822358433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba05d1801bbb55716b014287ef6d2a8e0065c2e60eb0da2be941e285cce4111d,PodSandboxId:14111cba76dbad18e6e7a34e19ee1b5d192a8facff2d20aca16a16ad6fc22bf7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718016612826583803,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18be5875f033dc26e05de432e9aafd5da62427c82b8a7148b7a2315e67a331fa,PodSandboxId:768cff5363857a285258a9ed1604f685fa33d5014b73b34d56f72f72557434f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718016610822393036,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e293a1cc869311fd15c723f109226cd7cf9e58f9c0ce73b81e66e643ba0824,PodSandboxId:276099ec692d58a43f2137fdb8c495cf2b238659587a093f63455929cc0159f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718016607125159409,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031c3214a18181965175ad1ce4be9461912a8f144a9fd8499e18a516fbc4c24b,PodSandboxId:cfe7af207d454e48b4c9a313d5fffb0f03c0fb7b7fb6a479a1b43dc5e8d3fa0f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718016585794439061,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b7f7bf516814f2c5dbe0fbc6daa3a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:d6b392205cc4da349937ddbd66cd5e4e32466011eb011c9e13a0214e5aeab566,PodSandboxId:92b6f53b325e00531ba020a4091debef83c310509523dcadd98455c576589d1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718016573870529543,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2fc31b
edad8ed60279933edc3fdf9c744a1606b0249fb4358d66b5c7884b47,PodSandboxId:1322b1eb5b92d55d2b0427c212e21c61f03f72a74f50d3d727c725295eaf3c44,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718016574022181270,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a358cc1cc573aa1750cc09e41a48373a9ec054c409
3e9b04258e36921b56cf5,PodSandboxId:3afe7674416b272a7b1f2f0765e713a115b8a9fc430d4da60440baaec31d798c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718016573906704187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a51d5bffe5db4200ac6336c7ca76cc182a95d91ff55c5a12955c341dd76f23c5,PodSandboxId:38fe7da9f5e494f306636e4ee0f552c2e44d43db2ef1a04a5ea901f66d5db1e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718016573751897701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73c4fbf165479cdecba73b84a6325cb243bbb4cd1fed39f1c9a2c00168252e1,PodSandboxId:d3e905f6d61a711b33785d0332754575ce24a61714424b5bce0bd881d36495df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718016573784410334,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMes
sagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca1b692a8aa8fde2427299395cc8e93b726bd59d0d7029f6a172775a2ed06780,PodSandboxId:d74bbdd47986be76d0cd64bcc477460ea153199ba5f7b49f49a95d6c410dc7c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718016573866821840,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\
",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a35ae66a1bbe396e6ff9d769def35e984902ed42b5989274e34cad8f90ba2627,PodSandboxId:14111cba76dbad18e6e7a34e19ee1b5d192a8facff2d20aca16a16ad6fc22bf7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718016573678086400,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[
string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ce07d12f096d630f9093eb4eeb3bcfb435174cad5058aad05bd4c955206bef,PodSandboxId:4e5f5234f2b8f6ae4a7073f73c5471b4bcd40a5c30d9f6f34994a1b033dffa5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718016573705740508,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kuber
netes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2874c04d7e6035f0b4f93397eceefa3af883aa2a03dc83be4a8aced86a5e132,PodSandboxId:4f03a24f1c978aee692934393624f50f3f6023665dc034769ec878f8b821ad07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718016084446209177,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kuberne
tes.container.hash: 8230443c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f037e4537f6182747a78c8398e388d1cd43fe536754d6d8a50f52b8689b3163,PodSandboxId:937195f05576713819cba22da4e17238c7f675cd0d37572dfc6718570bb4938f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718015930175667446,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:534a412f3a743952b0fba0175071fb9a47fd04169c4014721c7e5c6931d7e62f,PodSandboxId:b454f12ed3fe06b7ae98d62eb1932133902e43f1db5bb572871f5eb7765942b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718015930144918315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa492285e9f663d2b76c575594b5ba550e97e7861c891c05022d7e2ac1a78f91,PodSandboxId:9c2610533ce9301fe46003696bb8fb9ed9f112b3cb0f1a144f0e614826879c22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718015925064910752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b93b06d8221ab57065f3fdffaa93240b54fcaea6359ee510147933ebc24cdd,PodSandboxId:ae496093662088de763239c043f30d1770c7ce342b51213f0abd2a6d78e5beb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1718015904609428104,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:538119110afb1122b2b9d43d0a15441ed76f351b1221ca19caa981d3aab0eb82,PodSandboxId:1c1c2a570436913958921b6806bdea488c57ba8e053d9bc44cde3c1407fe58c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1718015904613266630,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4df939cc-e365-4ffb-9949-394e560d4191 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:54:39 ha-565925 crio[3904]: time="2024-06-10 10:54:39.532868619Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a3790215-6550-4670-9ac5-46cc22dacbf1 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:54:39 ha-565925 crio[3904]: time="2024-06-10 10:54:39.532976391Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a3790215-6550-4670-9ac5-46cc22dacbf1 name=/runtime.v1.RuntimeService/Version
	Jun 10 10:54:39 ha-565925 crio[3904]: time="2024-06-10 10:54:39.534419323Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=afbbcdc1-a4aa-4019-80c7-305b9a5da112 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:54:39 ha-565925 crio[3904]: time="2024-06-10 10:54:39.535033722Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718016879534963054,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=afbbcdc1-a4aa-4019-80c7-305b9a5da112 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 10:54:39 ha-565925 crio[3904]: time="2024-06-10 10:54:39.535471242Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=779d68d0-f24a-4c4a-86ac-b4235c501e3d name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:54:39 ha-565925 crio[3904]: time="2024-06-10 10:54:39.535541888Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=779d68d0-f24a-4c4a-86ac-b4235c501e3d name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 10:54:39 ha-565925 crio[3904]: time="2024-06-10 10:54:39.536002710Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:30454a419886c40b480f6310ea93590cfd5ce458d59101eb2f1d8b18ccc00fe3,PodSandboxId:1322b1eb5b92d55d2b0427c212e21c61f03f72a74f50d3d727c725295eaf3c44,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718016655830984097,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f42a3959512141305a423acbd9e3651a0d52b5082c682b258cd4164bf4c8e22,PodSandboxId:768cff5363857a285258a9ed1604f685fa33d5014b73b34d56f72f72557434f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718016651830324024,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:895531b30d08486c2c45c81d3c4061852a40480faff500bc98d063e08c3908f2,PodSandboxId:4e5f5234f2b8f6ae4a7073f73c5471b4bcd40a5c30d9f6f34994a1b033dffa5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718016615822358433,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba05d1801bbb55716b014287ef6d2a8e0065c2e60eb0da2be941e285cce4111d,PodSandboxId:14111cba76dbad18e6e7a34e19ee1b5d192a8facff2d20aca16a16ad6fc22bf7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718016612826583803,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18be5875f033dc26e05de432e9aafd5da62427c82b8a7148b7a2315e67a331fa,PodSandboxId:768cff5363857a285258a9ed1604f685fa33d5014b73b34d56f72f72557434f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718016610822393036,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e293a1cc869311fd15c723f109226cd7cf9e58f9c0ce73b81e66e643ba0824,PodSandboxId:276099ec692d58a43f2137fdb8c495cf2b238659587a093f63455929cc0159f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718016607125159409,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:031c3214a18181965175ad1ce4be9461912a8f144a9fd8499e18a516fbc4c24b,PodSandboxId:cfe7af207d454e48b4c9a313d5fffb0f03c0fb7b7fb6a479a1b43dc5e8d3fa0f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718016585794439061,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b7f7bf516814f2c5dbe0fbc6daa3a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeri
od: 30,},},&Container{Id:d6b392205cc4da349937ddbd66cd5e4e32466011eb011c9e13a0214e5aeab566,PodSandboxId:92b6f53b325e00531ba020a4091debef83c310509523dcadd98455c576589d1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718016573870529543,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d2fc31b
edad8ed60279933edc3fdf9c744a1606b0249fb4358d66b5c7884b47,PodSandboxId:1322b1eb5b92d55d2b0427c212e21c61f03f72a74f50d3d727c725295eaf3c44,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718016574022181270,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a358cc1cc573aa1750cc09e41a48373a9ec054c409
3e9b04258e36921b56cf5,PodSandboxId:3afe7674416b272a7b1f2f0765e713a115b8a9fc430d4da60440baaec31d798c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718016573906704187,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a51d5bffe5db4200ac6336c7ca76cc182a95d91ff55c5a12955c341dd76f23c5,PodSandboxId:38fe7da9f5e494f306636e4ee0f552c2e44d43db2ef1a04a5ea901f66d5db1e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718016573751897701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.con
tainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73c4fbf165479cdecba73b84a6325cb243bbb4cd1fed39f1c9a2c00168252e1,PodSandboxId:d3e905f6d61a711b33785d0332754575ce24a61714424b5bce0bd881d36495df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718016573784410334,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMes
sagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca1b692a8aa8fde2427299395cc8e93b726bd59d0d7029f6a172775a2ed06780,PodSandboxId:d74bbdd47986be76d0cd64bcc477460ea153199ba5f7b49f49a95d6c410dc7c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718016573866821840,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\
",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a35ae66a1bbe396e6ff9d769def35e984902ed42b5989274e34cad8f90ba2627,PodSandboxId:14111cba76dbad18e6e7a34e19ee1b5d192a8facff2d20aca16a16ad6fc22bf7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718016573678086400,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[
string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10ce07d12f096d630f9093eb4eeb3bcfb435174cad5058aad05bd4c955206bef,PodSandboxId:4e5f5234f2b8f6ae4a7073f73c5471b4bcd40a5c30d9f6f34994a1b033dffa5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718016573705740508,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kuber
netes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2874c04d7e6035f0b4f93397eceefa3af883aa2a03dc83be4a8aced86a5e132,PodSandboxId:4f03a24f1c978aee692934393624f50f3f6023665dc034769ec878f8b821ad07,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718016084446209177,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kuberne
tes.container.hash: 8230443c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f037e4537f6182747a78c8398e388d1cd43fe536754d6d8a50f52b8689b3163,PodSandboxId:937195f05576713819cba22da4e17238c7f675cd0d37572dfc6718570bb4938f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718015930175667446,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:534a412f3a743952b0fba0175071fb9a47fd04169c4014721c7e5c6931d7e62f,PodSandboxId:b454f12ed3fe06b7ae98d62eb1932133902e43f1db5bb572871f5eb7765942b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718015930144918315,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa492285e9f663d2b76c575594b5ba550e97e7861c891c05022d7e2ac1a78f91,PodSandboxId:9c2610533ce9301fe46003696bb8fb9ed9f112b3cb0f1a144f0e614826879c22,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f9
9937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718015925064910752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b93b06d8221ab57065f3fdffaa93240b54fcaea6359ee510147933ebc24cdd,PodSandboxId:ae496093662088de763239c043f30d1770c7ce342b51213f0abd2a6d78e5beb7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7
691a75a899,State:CONTAINER_EXITED,CreatedAt:1718015904609428104,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:538119110afb1122b2b9d43d0a15441ed76f351b1221ca19caa981d3aab0eb82,PodSandboxId:1c1c2a570436913958921b6806bdea488c57ba8e053d9bc44cde3c1407fe58c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedA
t:1718015904613266630,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=779d68d0-f24a-4c4a-86ac-b4235c501e3d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	30454a419886c       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      3 minutes ago       Running             kindnet-cni               4                   1322b1eb5b92d       kindnet-rnn59
	3f42a39595121       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   768cff5363857       storage-provisioner
	895531b30d084       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      4 minutes ago       Running             kube-apiserver            3                   4e5f5234f2b8f       kube-apiserver-ha-565925
	ba05d1801bbb5       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      4 minutes ago       Running             kube-controller-manager   2                   14111cba76dba       kube-controller-manager-ha-565925
	18be5875f033d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   768cff5363857       storage-provisioner
	51e293a1cc869       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   276099ec692d5       busybox-fc5497c4f-6wmkd
	031c3214a1818       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   cfe7af207d454       kube-vip-ha-565925
	6d2fc31bedad8       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      5 minutes ago       Exited              kindnet-cni               3                   1322b1eb5b92d       kindnet-rnn59
	0a358cc1cc573       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   3afe7674416b2       coredns-7db6d8ff4d-wn6nh
	d6b392205cc4d       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      5 minutes ago       Running             kube-proxy                1                   92b6f53b325e0       kube-proxy-wdjhn
	ca1b692a8aa8f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   d74bbdd47986b       coredns-7db6d8ff4d-545cf
	d73c4fbf16547       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      5 minutes ago       Running             kube-scheduler            1                   d3e905f6d61a7       kube-scheduler-ha-565925
	a51d5bffe5db4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   38fe7da9f5e49       etcd-ha-565925
	10ce07d12f096       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      5 minutes ago       Exited              kube-apiserver            2                   4e5f5234f2b8f       kube-apiserver-ha-565925
	a35ae66a1bbe3       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      5 minutes ago       Exited              kube-controller-manager   1                   14111cba76dba       kube-controller-manager-ha-565925
	e2874c04d7e60       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   4f03a24f1c978       busybox-fc5497c4f-6wmkd
	1f037e4537f61       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   937195f055767       coredns-7db6d8ff4d-545cf
	534a412f3a743       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   b454f12ed3fe0       coredns-7db6d8ff4d-wn6nh
	fa492285e9f66       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      15 minutes ago      Exited              kube-proxy                0                   9c2610533ce93       kube-proxy-wdjhn
	538119110afb1       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      16 minutes ago      Exited              kube-scheduler            0                   1c1c2a5704369       kube-scheduler-ha-565925
	15b93b06d8221       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   ae49609366208       etcd-ha-565925
	
	
	==> coredns [0a358cc1cc573aa1750cc09e41a48373a9ec054c4093e9b04258e36921b56cf5] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [1f037e4537f6182747a78c8398e388d1cd43fe536754d6d8a50f52b8689b3163] <==
	[INFO] 10.244.1.2:48212 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000372595s
	[INFO] 10.244.1.2:38672 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000558623s
	[INFO] 10.244.1.2:39378 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001712401s
	[INFO] 10.244.2.2:60283 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000168931s
	[INFO] 10.244.0.4:44797 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009875834s
	[INFO] 10.244.0.4:48555 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000169499s
	[INFO] 10.244.0.4:59395 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000177597s
	[INFO] 10.244.1.2:59265 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000530757s
	[INFO] 10.244.1.2:47710 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001604733s
	[INFO] 10.244.1.2:52315 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000138586s
	[INFO] 10.244.2.2:55693 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000155911s
	[INFO] 10.244.2.2:58799 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000094891s
	[INFO] 10.244.2.2:42423 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109708s
	[INFO] 10.244.0.4:50874 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000174304s
	[INFO] 10.244.1.2:48744 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098356s
	[INFO] 10.244.1.2:57572 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107588s
	[INFO] 10.244.1.2:43906 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000582793s
	[INFO] 10.244.0.4:36933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083881s
	[INFO] 10.244.0.4:57895 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00011453s
	[INFO] 10.244.1.2:33157 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149048s
	[INFO] 10.244.1.2:51327 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000136605s
	[INFO] 10.244.1.2:57659 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000126557s
	[INFO] 10.244.2.2:42606 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000153767s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [534a412f3a743952b0fba0175071fb9a47fd04169c4014721c7e5c6931d7e62f] <==
	[INFO] 10.244.1.2:56818 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001759713s
	[INFO] 10.244.1.2:38288 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.001994069s
	[INFO] 10.244.1.2:34752 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000150866s
	[INFO] 10.244.1.2:40260 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000146857s
	[INFO] 10.244.2.2:44655 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154352s
	[INFO] 10.244.2.2:33459 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001816989s
	[INFO] 10.244.2.2:44738 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000324114s
	[INFO] 10.244.2.2:47736 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000091876s
	[INFO] 10.244.2.2:44490 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001443467s
	[INFO] 10.244.0.4:55625 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000175656s
	[INFO] 10.244.0.4:39661 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080931s
	[INFO] 10.244.0.4:50296 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000636942s
	[INFO] 10.244.1.2:38824 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118172s
	[INFO] 10.244.2.2:42842 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000216365s
	[INFO] 10.244.2.2:59068 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00011868s
	[INFO] 10.244.2.2:38486 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000206394s
	[INFO] 10.244.2.2:33649 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110039s
	[INFO] 10.244.0.4:39573 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000202562s
	[INFO] 10.244.0.4:57326 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000128886s
	[INFO] 10.244.1.2:39682 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000217002s
	[INFO] 10.244.2.2:39360 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000367518s
	[INFO] 10.244.2.2:55914 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000522453s
	[INFO] 10.244.2.2:54263 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00020711s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ca1b692a8aa8fde2427299395cc8e93b726bd59d0d7029f6a172775a2ed06780] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:37476->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-565925
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565925
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-565925
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T10_38_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 10:38:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565925
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 10:54:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 10:50:19 +0000   Mon, 10 Jun 2024 10:38:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 10:50:19 +0000   Mon, 10 Jun 2024 10:38:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 10:50:19 +0000   Mon, 10 Jun 2024 10:38:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 10:50:19 +0000   Mon, 10 Jun 2024 10:38:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.208
	  Hostname:    ha-565925
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 81e39b112b50436db5c7fc16ce8eb53e
	  System UUID:                81e39b11-2b50-436d-b5c7-fc16ce8eb53e
	  Boot ID:                    afd4fe8d-84f7-41ff-9890-dc78b1ff1343
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6wmkd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-545cf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-wn6nh             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-565925                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-rnn59                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-565925             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-565925    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-wdjhn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-565925             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-565925                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 15m                  kube-proxy       
	  Normal   Starting                 4m23s                kube-proxy       
	  Normal   NodeHasNoDiskPressure    16m                  kubelet          Node ha-565925 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m                  kubelet          Node ha-565925 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m                  kubelet          Node ha-565925 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m                  node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Normal   NodeReady                15m                  kubelet          Node ha-565925 status is now: NodeReady
	  Normal   RegisteredNode           14m                  node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Normal   RegisteredNode           13m                  node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Warning  ContainerGCFailed        5m9s (x2 over 6m9s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m9s                 node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Normal   RegisteredNode           4m7s                 node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Normal   RegisteredNode           3m18s                node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	
	
	Name:               ha-565925-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565925-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-565925
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T10_39_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 10:39:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565925-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 10:54:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 10:53:16 +0000   Mon, 10 Jun 2024 10:53:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 10:53:16 +0000   Mon, 10 Jun 2024 10:53:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 10:53:16 +0000   Mon, 10 Jun 2024 10:53:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 10:53:16 +0000   Mon, 10 Jun 2024 10:53:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    ha-565925-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 55a76fcaaea54bebb8694a2ff5e7d2ea
	  System UUID:                55a76fca-aea5-4beb-b869-4a2ff5e7d2ea
	  Boot ID:                    f2031124-7282-4f77-956b-81d80d2807d2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8g67g                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-565925-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-9jv7q                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-565925-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-565925-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-vbgnx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-565925-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-565925-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 4m13s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-565925-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-565925-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-565925-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           14m                    node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-565925-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    4m42s (x8 over 4m42s)  kubelet          Node ha-565925-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m42s (x8 over 4m42s)  kubelet          Node ha-565925-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     4m42s (x7 over 4m42s)  kubelet          Node ha-565925-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal  RegisteredNode           3m19s                  node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal  NodeNotReady             108s                   node-controller  Node ha-565925-m02 status is now: NodeNotReady
	
	
	Name:               ha-565925-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565925-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-565925
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T10_41_59_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 10:41:58 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565925-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 10:52:12 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 10 Jun 2024 10:51:52 +0000   Mon, 10 Jun 2024 10:52:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 10 Jun 2024 10:51:52 +0000   Mon, 10 Jun 2024 10:52:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 10 Jun 2024 10:51:52 +0000   Mon, 10 Jun 2024 10:52:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 10 Jun 2024 10:51:52 +0000   Mon, 10 Jun 2024 10:52:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.229
	  Hostname:    ha-565925-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5196e1f9b5684ae78368fe8d66c3d24c
	  System UUID:                5196e1f9-b568-4ae7-8368-fe8d66c3d24c
	  Boot ID:                    fa33354e-1710-42c3-b31e-616fe87f501e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pnv2t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-lkf5b              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-dpsbw           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-565925-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-565925-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-565925-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-565925-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m10s                  node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   RegisteredNode           4m8s                   node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   RegisteredNode           3m19s                  node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   NodeHasSufficientMemory  2m48s (x3 over 2m48s)  kubelet          Node ha-565925-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m48s (x3 over 2m48s)  kubelet          Node ha-565925-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x3 over 2m48s)  kubelet          Node ha-565925-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s (x2 over 2m48s)  kubelet          Node ha-565925-m04 has been rebooted, boot id: fa33354e-1710-42c3-b31e-616fe87f501e
	  Normal   NodeReady                2m48s (x2 over 2m48s)  kubelet          Node ha-565925-m04 status is now: NodeReady
	  Normal   NodeNotReady             105s (x2 over 3m30s)   node-controller  Node ha-565925-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.150837] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.061096] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061390] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.176128] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.114890] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.264219] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +3.909095] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +3.637727] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +0.061637] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.135890] systemd-fstab-generator[1360]: Ignoring "noauto" option for root device
	[  +0.082129] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.392312] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.014769] kauditd_printk_skb: 43 callbacks suppressed
	[  +9.917879] kauditd_printk_skb: 21 callbacks suppressed
	[Jun10 10:49] systemd-fstab-generator[3825]: Ignoring "noauto" option for root device
	[  +0.169090] systemd-fstab-generator[3837]: Ignoring "noauto" option for root device
	[  +0.188008] systemd-fstab-generator[3851]: Ignoring "noauto" option for root device
	[  +0.156438] systemd-fstab-generator[3863]: Ignoring "noauto" option for root device
	[  +0.268788] systemd-fstab-generator[3891]: Ignoring "noauto" option for root device
	[  +0.739516] systemd-fstab-generator[3989]: Ignoring "noauto" option for root device
	[ +12.921754] kauditd_printk_skb: 218 callbacks suppressed
	[ +10.073147] kauditd_printk_skb: 1 callbacks suppressed
	[Jun10 10:50] kauditd_printk_skb: 1 callbacks suppressed
	[ +10.065204] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [15b93b06d8221ab57065f3fdffaa93240b54fcaea6359ee510147933ebc24cdd] <==
	2024/06/10 10:47:59 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-10T10:47:59.717672Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.157438722s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" limit:10000 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-06-10T10:47:59.727283Z","caller":"traceutil/trace.go:171","msg":"trace[2016543740] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; }","duration":"7.167054579s","start":"2024-06-10T10:47:52.560223Z","end":"2024-06-10T10:47:59.727278Z","steps":["trace[2016543740] 'agreement among raft nodes before linearized reading'  (duration: 7.157438296s)"],"step_count":1}
	2024/06/10 10:47:59 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-06-10T10:47:59.859806Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":16210302245861675405,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-06-10T10:47:59.98587Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.208:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-10T10:47:59.985953Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.208:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-10T10:47:59.986031Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"7fe6bf77aaafe0f6","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-06-10T10:47:59.986226Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"55cc759d8ab60945"}
	{"level":"info","ts":"2024-06-10T10:47:59.986285Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"55cc759d8ab60945"}
	{"level":"info","ts":"2024-06-10T10:47:59.986337Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"55cc759d8ab60945"}
	{"level":"info","ts":"2024-06-10T10:47:59.986484Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"55cc759d8ab60945"}
	{"level":"info","ts":"2024-06-10T10:47:59.986541Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"55cc759d8ab60945"}
	{"level":"info","ts":"2024-06-10T10:47:59.986597Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"55cc759d8ab60945"}
	{"level":"info","ts":"2024-06-10T10:47:59.986628Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"55cc759d8ab60945"}
	{"level":"info","ts":"2024-06-10T10:47:59.986651Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:47:59.986681Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:47:59.986719Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:47:59.986922Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:47:59.986994Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:47:59.987053Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:47:59.987084Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:47:59.99121Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.208:2380"}
	{"level":"info","ts":"2024-06-10T10:47:59.991336Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.208:2380"}
	{"level":"info","ts":"2024-06-10T10:47:59.99139Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-565925","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.208:2380"],"advertise-client-urls":["https://192.168.39.208:2379"]}
	
	
	==> etcd [a51d5bffe5db4200ac6336c7ca76cc182a95d91ff55c5a12955c341dd76f23c5] <==
	{"level":"info","ts":"2024-06-10T10:51:55.589843Z","caller":"traceutil/trace.go:171","msg":"trace[1333195628] transaction","detail":"{read_only:false; response_revision:2567; number_of_response:1; }","duration":"212.561626ms","start":"2024-06-10T10:51:55.377245Z","end":"2024-06-10T10:51:55.589807Z","steps":["trace[1333195628] 'process raft request'  (duration: 212.365877ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T10:51:55.590491Z","caller":"traceutil/trace.go:171","msg":"trace[1575856812] linearizableReadLoop","detail":"{readStateIndex:2985; appliedIndex:2986; }","duration":"169.543064ms","start":"2024-06-10T10:51:55.420885Z","end":"2024-06-10T10:51:55.590428Z","steps":["trace[1575856812] 'read index received'  (duration: 169.53759ms)","trace[1575856812] 'applied index is now lower than readState.Index'  (duration: 4.2µs)"],"step_count":2}
	{"level":"warn","ts":"2024-06-10T10:51:55.59097Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.000978ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-565925-m03\" ","response":"range_response_count:1 size:5677"}
	{"level":"info","ts":"2024-06-10T10:51:55.591071Z","caller":"traceutil/trace.go:171","msg":"trace[1171253762] range","detail":"{range_begin:/registry/minions/ha-565925-m03; range_end:; response_count:1; response_revision:2567; }","duration":"170.206364ms","start":"2024-06-10T10:51:55.420846Z","end":"2024-06-10T10:51:55.591053Z","steps":["trace[1171253762] 'agreement among raft nodes before linearized reading'  (duration: 169.845182ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T10:51:55.599093Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.7488ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-10T10:51:55.599162Z","caller":"traceutil/trace.go:171","msg":"trace[1312235231] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2567; }","duration":"168.849387ms","start":"2024-06-10T10:51:55.430299Z","end":"2024-06-10T10:51:55.599149Z","steps":["trace[1312235231] 'agreement among raft nodes before linearized reading'  (duration: 164.303188ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T10:52:05.820969Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.76:49036","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-06-10T10:52:05.832558Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 switched to configuration voters=(8156306394685010700 9216264208145965302)"}
	{"level":"info","ts":"2024-06-10T10:52:05.834808Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"fb8a78b66dce1ac7","local-member-id":"7fe6bf77aaafe0f6","removed-remote-peer-id":"55cc759d8ab60945","removed-remote-peer-urls":["https://192.168.39.76:2380"]}
	{"level":"info","ts":"2024-06-10T10:52:05.834918Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"55cc759d8ab60945"}
	{"level":"warn","ts":"2024-06-10T10:52:05.835431Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"55cc759d8ab60945"}
	{"level":"info","ts":"2024-06-10T10:52:05.83572Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"55cc759d8ab60945"}
	{"level":"warn","ts":"2024-06-10T10:52:05.83619Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"55cc759d8ab60945"}
	{"level":"info","ts":"2024-06-10T10:52:05.836396Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"55cc759d8ab60945"}
	{"level":"info","ts":"2024-06-10T10:52:05.83988Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"55cc759d8ab60945"}
	{"level":"warn","ts":"2024-06-10T10:52:05.840011Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"55cc759d8ab60945","error":"context canceled"}
	{"level":"warn","ts":"2024-06-10T10:52:05.840113Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"55cc759d8ab60945","error":"failed to read 55cc759d8ab60945 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-06-10T10:52:05.840153Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"55cc759d8ab60945"}
	{"level":"warn","ts":"2024-06-10T10:52:05.84028Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"55cc759d8ab60945","error":"context canceled"}
	{"level":"info","ts":"2024-06-10T10:52:05.840315Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"55cc759d8ab60945"}
	{"level":"info","ts":"2024-06-10T10:52:05.840328Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"55cc759d8ab60945"}
	{"level":"info","ts":"2024-06-10T10:52:05.840339Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"7fe6bf77aaafe0f6","removed-remote-peer-id":"55cc759d8ab60945"}
	{"level":"info","ts":"2024-06-10T10:52:05.840391Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"7fe6bf77aaafe0f6","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"55cc759d8ab60945"}
	{"level":"warn","ts":"2024-06-10T10:52:05.849673Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id-stream-handler":"7fe6bf77aaafe0f6","remote-peer-id-from":"55cc759d8ab60945"}
	{"level":"warn","ts":"2024-06-10T10:52:05.866134Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.76:40144","server-name":"","error":"read tcp 192.168.39.208:2380->192.168.39.76:40144: read: connection reset by peer"}
	
	
	==> kernel <==
	 10:54:40 up 16 min,  0 users,  load average: 0.44, 0.45, 0.32
	Linux ha-565925 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [30454a419886c40b480f6310ea93590cfd5ce458d59101eb2f1d8b18ccc00fe3] <==
	I0610 10:53:56.837180       1 main.go:250] Node ha-565925-m04 has CIDR [10.244.3.0/24] 
	I0610 10:54:06.851120       1 main.go:223] Handling node with IPs: map[192.168.39.208:{}]
	I0610 10:54:06.851150       1 main.go:227] handling current node
	I0610 10:54:06.851162       1 main.go:223] Handling node with IPs: map[192.168.39.230:{}]
	I0610 10:54:06.851166       1 main.go:250] Node ha-565925-m02 has CIDR [10.244.1.0/24] 
	I0610 10:54:06.851283       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0610 10:54:06.851300       1 main.go:250] Node ha-565925-m04 has CIDR [10.244.3.0/24] 
	I0610 10:54:16.862394       1 main.go:223] Handling node with IPs: map[192.168.39.208:{}]
	I0610 10:54:16.862450       1 main.go:227] handling current node
	I0610 10:54:16.862482       1 main.go:223] Handling node with IPs: map[192.168.39.230:{}]
	I0610 10:54:16.862491       1 main.go:250] Node ha-565925-m02 has CIDR [10.244.1.0/24] 
	I0610 10:54:16.862705       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0610 10:54:16.862733       1 main.go:250] Node ha-565925-m04 has CIDR [10.244.3.0/24] 
	I0610 10:54:26.876413       1 main.go:223] Handling node with IPs: map[192.168.39.208:{}]
	I0610 10:54:26.876451       1 main.go:227] handling current node
	I0610 10:54:26.876461       1 main.go:223] Handling node with IPs: map[192.168.39.230:{}]
	I0610 10:54:26.876467       1 main.go:250] Node ha-565925-m02 has CIDR [10.244.1.0/24] 
	I0610 10:54:26.876568       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0610 10:54:26.876588       1 main.go:250] Node ha-565925-m04 has CIDR [10.244.3.0/24] 
	I0610 10:54:36.884709       1 main.go:223] Handling node with IPs: map[192.168.39.208:{}]
	I0610 10:54:36.884783       1 main.go:227] handling current node
	I0610 10:54:36.884921       1 main.go:223] Handling node with IPs: map[192.168.39.230:{}]
	I0610 10:54:36.884965       1 main.go:250] Node ha-565925-m02 has CIDR [10.244.1.0/24] 
	I0610 10:54:36.885206       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0610 10:54:36.885227       1 main.go:250] Node ha-565925-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [6d2fc31bedad8ed60279933edc3fdf9c744a1606b0249fb4358d66b5c7884b47] <==
	I0610 10:49:34.580026       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0610 10:49:44.846194       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0610 10:49:54.853492       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0610 10:49:55.854398       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0610 10:49:57.855204       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0610 10:50:00.857148       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xe3b
	
	
	==> kube-apiserver [10ce07d12f096d630f9093eb4eeb3bcfb435174cad5058aad05bd4c955206bef] <==
	I0610 10:49:34.359535       1 options.go:221] external host was not specified, using 192.168.39.208
	I0610 10:49:34.361960       1 server.go:148] Version: v1.30.1
	I0610 10:49:34.365190       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 10:49:35.246965       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0610 10:49:35.263293       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0610 10:49:35.263336       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0610 10:49:35.263547       1 instance.go:299] Using reconciler: lease
	I0610 10:49:35.264009       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0610 10:49:55.243480       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0610 10:49:55.244605       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0610 10:49:55.265519       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [895531b30d08486c2c45c81d3c4061852a40480faff500bc98d063e08c3908f2] <==
	I0610 10:50:17.816246       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 10:50:17.898928       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0610 10:50:17.899005       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 10:50:17.899086       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0610 10:50:17.901846       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0610 10:50:17.902077       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0610 10:50:17.902140       1 shared_informer.go:320] Caches are synced for configmaps
	I0610 10:50:17.902195       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 10:50:17.904366       1 aggregator.go:165] initial CRD sync complete...
	I0610 10:50:17.904414       1 autoregister_controller.go:141] Starting autoregister controller
	I0610 10:50:17.904421       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0610 10:50:17.904426       1 cache.go:39] Caches are synced for autoregister controller
	I0610 10:50:17.911629       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0610 10:50:17.914467       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0610 10:50:17.924454       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0610 10:50:17.924494       1 policy_source.go:224] refreshing policies
	I0610 10:50:17.994953       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0610 10:50:18.097606       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.76]
	I0610 10:50:18.101311       1 controller.go:615] quota admission added evaluator for: endpoints
	I0610 10:50:18.140209       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0610 10:50:18.163711       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0610 10:50:18.813070       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0610 10:50:19.224408       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.208 192.168.39.76]
	W0610 10:50:39.226166       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.208 192.168.39.230]
	W0610 10:52:19.231225       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.208 192.168.39.230]
	
	
	==> kube-controller-manager [a35ae66a1bbe396e6ff9d769def35e984902ed42b5989274e34cad8f90ba2627] <==
	I0610 10:49:35.753024       1 serving.go:380] Generated self-signed cert in-memory
	I0610 10:49:36.068608       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0610 10:49:36.068712       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 10:49:36.070825       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0610 10:49:36.071542       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 10:49:36.071675       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 10:49:36.071819       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0610 10:49:56.272999       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.208:8443/healthz\": dial tcp 192.168.39.208:8443: connect: connection refused"
	
	
	==> kube-controller-manager [ba05d1801bbb55716b014287ef6d2a8e0065c2e60eb0da2be941e285cce4111d] <==
	I0610 10:52:52.342079       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.05911ms"
	I0610 10:52:52.343440       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.253µs"
	I0610 10:52:55.722919       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.731506ms"
	I0610 10:52:55.723100       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.888µs"
	I0610 10:53:08.241804       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="84.476338ms"
	I0610 10:53:08.241910       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.6µs"
	E0610 10:53:10.035799       1 gc_controller.go:153] "Failed to get node" err="node \"ha-565925-m03\" not found" logger="pod-garbage-collector-controller" node="ha-565925-m03"
	E0610 10:53:10.035914       1 gc_controller.go:153] "Failed to get node" err="node \"ha-565925-m03\" not found" logger="pod-garbage-collector-controller" node="ha-565925-m03"
	E0610 10:53:10.035942       1 gc_controller.go:153] "Failed to get node" err="node \"ha-565925-m03\" not found" logger="pod-garbage-collector-controller" node="ha-565925-m03"
	E0610 10:53:10.035966       1 gc_controller.go:153] "Failed to get node" err="node \"ha-565925-m03\" not found" logger="pod-garbage-collector-controller" node="ha-565925-m03"
	E0610 10:53:10.035989       1 gc_controller.go:153] "Failed to get node" err="node \"ha-565925-m03\" not found" logger="pod-garbage-collector-controller" node="ha-565925-m03"
	I0610 10:53:10.049096       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-d44ft"
	I0610 10:53:10.082932       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-d44ft"
	I0610 10:53:10.083884       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-565925-m03"
	I0610 10:53:10.109277       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-565925-m03"
	I0610 10:53:10.109411       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-565925-m03"
	I0610 10:53:10.137163       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-565925-m03"
	I0610 10:53:10.137427       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-565925-m03"
	I0610 10:53:10.170058       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-565925-m03"
	I0610 10:53:10.170174       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-565925-m03"
	I0610 10:53:10.197203       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-565925-m03"
	I0610 10:53:10.197334       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-565925-m03"
	I0610 10:53:10.225629       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-565925-m03"
	I0610 10:53:10.225716       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-9tcng"
	I0610 10:53:10.257733       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-9tcng"
	
	
	==> kube-proxy [d6b392205cc4da349937ddbd66cd5e4e32466011eb011c9e13a0214e5aeab566] <==
	I0610 10:50:16.480570       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 10:50:16.480704       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 10:50:16.480733       1 server_linux.go:165] "Using iptables Proxier"
	I0610 10:50:16.483458       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 10:50:16.483693       1 server.go:872] "Version info" version="v1.30.1"
	I0610 10:50:16.483731       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 10:50:16.485415       1 config.go:192] "Starting service config controller"
	I0610 10:50:16.485458       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 10:50:16.485503       1 config.go:101] "Starting endpoint slice config controller"
	I0610 10:50:16.485519       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 10:50:16.486337       1 config.go:319] "Starting node config controller"
	I0610 10:50:16.486367       1 shared_informer.go:313] Waiting for caches to sync for node config
	W0610 10:50:19.481660       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:50:19.481945       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:50:19.483161       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0610 10:50:19.483323       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:50:19.483424       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:50:19.483590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:50:19.483667       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0610 10:50:20.586480       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 10:50:20.885886       1 shared_informer.go:320] Caches are synced for service config
	I0610 10:50:20.886651       1 shared_informer.go:320] Caches are synced for node config
	W0610 10:53:04.252585       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0610 10:53:04.252975       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0610 10:53:04.252979       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-proxy [fa492285e9f663d2b76c575594b5ba550e97e7861c891c05022d7e2ac1a78f91] <==
	E0610 10:46:44.441668       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:46:47.513142       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:46:47.513201       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:46:47.513147       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:46:47.513269       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:46:47.513416       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:46:47.513293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:46:53.659409       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:46:53.659583       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:46:53.659686       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:46:53.659629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:46:53.659443       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:46:53.659911       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:47:02.874609       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:47:02.874673       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:47:02.874818       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:47:02.874864       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:47:09.018265       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:47:09.018331       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:47:18.233358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:47:18.233460       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1895": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:47:30.522595       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:47:30.522912       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1898": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:47:33.593898       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:47:33.594113       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&resourceVersion=1912": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [538119110afb1122b2b9d43d0a15441ed76f351b1221ca19caa981d3aab0eb82] <==
	W0610 10:47:56.323834       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0610 10:47:56.323909       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0610 10:47:56.655790       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0610 10:47:56.655830       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0610 10:47:56.805511       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 10:47:56.805649       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0610 10:47:56.972826       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0610 10:47:56.972956       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0610 10:47:56.975078       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 10:47:56.975114       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 10:47:57.017651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 10:47:57.017727       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0610 10:47:57.058312       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 10:47:57.058400       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0610 10:47:57.334507       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 10:47:57.334591       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0610 10:47:57.721852       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 10:47:57.721992       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 10:47:57.743513       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0610 10:47:57.743643       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0610 10:47:57.756633       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 10:47:57.756855       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 10:47:59.648277       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 10:47:59.648317       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 10:47:59.696248       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d73c4fbf165479cdecba73b84a6325cb243bbb4cd1fed39f1c9a2c00168252e1] <==
	W0610 10:50:13.171329       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:13.171445       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:13.349389       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:13.349453       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:14.073188       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.208:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:14.073242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.208:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:14.293199       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:14.293274       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:14.389307       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.208:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:14.389425       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.208:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:14.514209       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.208:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:14.514616       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.208:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:15.509656       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.208:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:15.509725       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.208:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:17.832639       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 10:50:17.832863       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 10:50:17.833061       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 10:50:17.833139       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 10:50:17.833237       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 10:50:17.833265       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 10:50:30.277918       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0610 10:52:02.506730       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pnv2t\": pod busybox-fc5497c4f-pnv2t is already assigned to node \"ha-565925-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-pnv2t" node="ha-565925-m04"
	E0610 10:52:02.508644       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod fc130e49-4bd9-4d39-86e2-5c9633be05c5(default/busybox-fc5497c4f-pnv2t) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-pnv2t"
	E0610 10:52:02.508944       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pnv2t\": pod busybox-fc5497c4f-pnv2t is already assigned to node \"ha-565925-m04\"" pod="default/busybox-fc5497c4f-pnv2t"
	I0610 10:52:02.510673       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-pnv2t" node="ha-565925-m04"
	
	
	==> kubelet <==
	Jun 10 10:50:54 ha-565925 kubelet[1367]: I0610 10:50:54.274112    1367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-6wmkd" podStartSLOduration=570.67640809 podStartE2EDuration="9m33.27408245s" podCreationTimestamp="2024-06-10 10:41:21 +0000 UTC" firstStartedPulling="2024-06-10 10:41:21.82200867 +0000 UTC m=+171.156915668" lastFinishedPulling="2024-06-10 10:41:24.419683031 +0000 UTC m=+173.754590028" observedRunningTime="2024-06-10 10:41:25.563101157 +0000 UTC m=+174.898008162" watchObservedRunningTime="2024-06-10 10:50:54.27408245 +0000 UTC m=+743.608989455"
	Jun 10 10:50:55 ha-565925 kubelet[1367]: I0610 10:50:55.810721    1367 scope.go:117] "RemoveContainer" containerID="6d2fc31bedad8ed60279933edc3fdf9c744a1606b0249fb4358d66b5c7884b47"
	Jun 10 10:51:17 ha-565925 kubelet[1367]: I0610 10:51:17.810140    1367 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-vip-ha-565925" podUID="039ffa3e-aac6-4bdc-a576-0158c7fb283d"
	Jun 10 10:51:17 ha-565925 kubelet[1367]: I0610 10:51:17.828628    1367 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-565925"
	Jun 10 10:51:19 ha-565925 kubelet[1367]: I0610 10:51:19.916173    1367 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-565925" podStartSLOduration=2.916150291 podStartE2EDuration="2.916150291s" podCreationTimestamp="2024-06-10 10:51:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-10 10:51:19.915605871 +0000 UTC m=+769.250512871" watchObservedRunningTime="2024-06-10 10:51:19.916150291 +0000 UTC m=+769.251057296"
	Jun 10 10:51:30 ha-565925 kubelet[1367]: E0610 10:51:30.830366    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 10:51:30 ha-565925 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 10:51:30 ha-565925 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 10:51:30 ha-565925 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 10:51:30 ha-565925 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 10:52:30 ha-565925 kubelet[1367]: E0610 10:52:30.828843    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 10:52:30 ha-565925 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 10:52:30 ha-565925 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 10:52:30 ha-565925 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 10:52:30 ha-565925 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 10:53:30 ha-565925 kubelet[1367]: E0610 10:53:30.828004    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 10:53:30 ha-565925 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 10:53:30 ha-565925 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 10:53:30 ha-565925 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 10:53:30 ha-565925 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 10:54:30 ha-565925 kubelet[1367]: E0610 10:54:30.827844    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 10:54:30 ha-565925 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 10:54:30 ha-565925 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 10:54:30 ha-565925 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 10:54:30 ha-565925 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 10:54:39.100336   30460 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19046-3880/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-565925 -n ha-565925
helpers_test.go:261: (dbg) Run:  kubectl --context ha-565925 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.75s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (651.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-565925 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0610 10:56:57.914227   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
E0610 10:58:20.959056   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
E0610 10:59:12.453798   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
E0610 11:01:57.914038   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
E0610 11:04:12.453116   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
ha_test.go:560: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-565925 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 102 (10m49.304972152s)

                                                
                                                
-- stdout --
	* [ha-565925] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19046
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-565925" primary control-plane node in "ha-565925" cluster
	* Updating the running kvm2 "ha-565925" VM ...
	* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	* Enabled addons: 
	
	* Starting "ha-565925-m02" control-plane node in "ha-565925" cluster
	* Updating the running kvm2 "ha-565925-m02" VM ...
	* Found network options:
	  - NO_PROXY=192.168.39.208
	* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.208
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:54:41.118006   30524 out.go:291] Setting OutFile to fd 1 ...
	I0610 10:54:41.118313   30524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:54:41.118326   30524 out.go:304] Setting ErrFile to fd 2...
	I0610 10:54:41.118331   30524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:54:41.118586   30524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 10:54:41.119100   30524 out.go:298] Setting JSON to false
	I0610 10:54:41.120030   30524 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2222,"bootTime":1718014659,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 10:54:41.120088   30524 start.go:139] virtualization: kvm guest
	I0610 10:54:41.122252   30524 out.go:177] * [ha-565925] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 10:54:41.123728   30524 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 10:54:41.123731   30524 notify.go:220] Checking for updates...
	I0610 10:54:41.125175   30524 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:54:41.126614   30524 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 10:54:41.128031   30524 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:54:41.129312   30524 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 10:54:41.130778   30524 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:54:41.132606   30524 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:54:41.133157   30524 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:54:41.133241   30524 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:54:41.148356   30524 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45461
	I0610 10:54:41.148855   30524 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:54:41.149465   30524 main.go:141] libmachine: Using API Version  1
	I0610 10:54:41.149493   30524 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:54:41.149856   30524 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:54:41.150063   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:54:41.150360   30524 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 10:54:41.150685   30524 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:54:41.150725   30524 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:54:41.166173   30524 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46089
	I0610 10:54:41.166610   30524 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:54:41.167143   30524 main.go:141] libmachine: Using API Version  1
	I0610 10:54:41.167177   30524 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:54:41.167585   30524 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:54:41.167745   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:54:41.204423   30524 out.go:177] * Using the kvm2 driver based on existing profile
	I0610 10:54:41.205821   30524 start.go:297] selected driver: kvm2
	I0610 10:54:41.205839   30524 start.go:901] validating driver "kvm2" against &{Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.229 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:54:41.206044   30524 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:54:41.206508   30524 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:54:41.206610   30524 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19046-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0610 10:54:41.221453   30524 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0610 10:54:41.222080   30524 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:54:41.222117   30524 cni.go:84] Creating CNI manager for ""
	I0610 10:54:41.222122   30524 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0610 10:54:41.222169   30524 start.go:340] cluster config:
	{Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.229 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:54:41.222302   30524 iso.go:125] acquiring lock: {Name:mk85871dbcb370cef71a4aa2ff7ef43503908c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:54:41.224822   30524 out.go:177] * Starting "ha-565925" primary control-plane node in "ha-565925" cluster
	I0610 10:54:41.226052   30524 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 10:54:41.226097   30524 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0610 10:54:41.226110   30524 cache.go:56] Caching tarball of preloaded images
	I0610 10:54:41.226211   30524 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 10:54:41.226230   30524 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0610 10:54:41.226375   30524 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json ...
	I0610 10:54:41.226626   30524 start.go:360] acquireMachinesLock for ha-565925: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:54:41.226690   30524 start.go:364] duration metric: took 37.509µs to acquireMachinesLock for "ha-565925"
	I0610 10:54:41.226705   30524 start.go:96] Skipping create...Using existing machine configuration
	I0610 10:54:41.226712   30524 fix.go:54] fixHost starting: 
	I0610 10:54:41.227120   30524 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:54:41.227161   30524 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:54:41.242583   30524 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42377
	I0610 10:54:41.242978   30524 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:54:41.243450   30524 main.go:141] libmachine: Using API Version  1
	I0610 10:54:41.243475   30524 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:54:41.243758   30524 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:54:41.243949   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:54:41.244094   30524 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:54:41.245612   30524 fix.go:112] recreateIfNeeded on ha-565925: state=Running err=<nil>
	W0610 10:54:41.245647   30524 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 10:54:41.247531   30524 out.go:177] * Updating the running kvm2 "ha-565925" VM ...
	I0610 10:54:41.248547   30524 machine.go:94] provisionDockerMachine start ...
	I0610 10:54:41.248566   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:54:41.248752   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:54:41.251686   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.252215   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:54:41.252246   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.252393   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:54:41.252533   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.252678   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.252823   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:54:41.253014   30524 main.go:141] libmachine: Using SSH client type: native
	I0610 10:54:41.253203   30524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:54:41.253216   30524 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 10:54:41.373744   30524 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565925
	
	I0610 10:54:41.373773   30524 main.go:141] libmachine: (ha-565925) Calling .GetMachineName
	I0610 10:54:41.374028   30524 buildroot.go:166] provisioning hostname "ha-565925"
	I0610 10:54:41.374051   30524 main.go:141] libmachine: (ha-565925) Calling .GetMachineName
	I0610 10:54:41.374251   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:54:41.376909   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.377435   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:54:41.377469   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.377677   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:54:41.377868   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.378048   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.378178   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:54:41.378464   30524 main.go:141] libmachine: Using SSH client type: native
	I0610 10:54:41.378656   30524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:54:41.378674   30524 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565925 && echo "ha-565925" | sudo tee /etc/hostname
	I0610 10:54:41.508245   30524 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565925
	
	I0610 10:54:41.508284   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:54:41.511418   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.511816   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:54:41.511845   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.512073   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:54:41.512267   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.512447   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.512583   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:54:41.512730   30524 main.go:141] libmachine: Using SSH client type: native
	I0610 10:54:41.512872   30524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:54:41.512888   30524 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565925' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565925/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565925' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 10:54:41.622194   30524 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 10:54:41.622225   30524 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19046-3880/.minikube CaCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19046-3880/.minikube}
	I0610 10:54:41.622254   30524 buildroot.go:174] setting up certificates
	I0610 10:54:41.622265   30524 provision.go:84] configureAuth start
	I0610 10:54:41.622280   30524 main.go:141] libmachine: (ha-565925) Calling .GetMachineName
	I0610 10:54:41.622553   30524 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:54:41.625606   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.626048   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:54:41.626080   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.626269   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:54:41.628920   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.629378   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:54:41.629408   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.629544   30524 provision.go:143] copyHostCerts
	I0610 10:54:41.629575   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 10:54:41.629647   30524 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem, removing ...
	I0610 10:54:41.629661   30524 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 10:54:41.629736   30524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem (1082 bytes)
	I0610 10:54:41.629825   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 10:54:41.629850   30524 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem, removing ...
	I0610 10:54:41.629856   30524 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 10:54:41.629892   30524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem (1123 bytes)
	I0610 10:54:41.629951   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 10:54:41.629974   30524 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem, removing ...
	I0610 10:54:41.629983   30524 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 10:54:41.630016   30524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem (1675 bytes)
	I0610 10:54:41.630079   30524 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem org=jenkins.ha-565925 san=[127.0.0.1 192.168.39.208 ha-565925 localhost minikube]
	I0610 10:54:41.796354   30524 provision.go:177] copyRemoteCerts
	I0610 10:54:41.796408   30524 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 10:54:41.796427   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:54:41.799182   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.799580   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:54:41.799612   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.799774   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:54:41.799964   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.800110   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:54:41.800269   30524 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:54:41.886743   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 10:54:41.886807   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 10:54:41.911979   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 10:54:41.912068   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0610 10:54:41.934784   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 10:54:41.934862   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 10:54:41.965258   30524 provision.go:87] duration metric: took 342.978909ms to configureAuth
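	(The configureAuth step above issues a machine server certificate whose SAN list covers the loopback address, the node IP, the hostname, localhost and minikube. Below is a minimal standalone Go sketch of producing such a CA-signed server certificate with crypto/x509; it is an illustration only, not minikube's provision code, and the throwaway CA generated here stands in for the real ca.pem/ca-key.pem under .minikube/certs.)

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA key/cert; the real flow loads the existing CA from disk.
	caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	must(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	must(err)
	caCert, err := x509.ParseCertificate(caDER)
	must(err)

	// Server certificate carrying the SAN list seen in the log line above.
	srvKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	must(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-565925"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-565925", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.208")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}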
	I0610 10:54:41.965284   30524 buildroot.go:189] setting minikube options for container-runtime
	I0610 10:54:41.965557   30524 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:54:41.965650   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:54:41.968652   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.969098   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:54:41.969127   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.969313   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:54:41.969506   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.969658   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.969782   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:54:41.969945   30524 main.go:141] libmachine: Using SSH client type: native
	I0610 10:54:41.970089   30524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:54:41.970105   30524 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 10:56:16.545583   30524 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 10:56:16.545610   30524 machine.go:97] duration metric: took 1m35.297049726s to provisionDockerMachine
	I0610 10:56:16.545622   30524 start.go:293] postStartSetup for "ha-565925" (driver="kvm2")
	I0610 10:56:16.545634   30524 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 10:56:16.545648   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:56:16.545946   30524 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 10:56:16.545974   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:56:16.549506   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.549888   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:56:16.549917   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.550060   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:56:16.550291   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:56:16.550434   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:56:16.550585   30524 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:56:16.643222   30524 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 10:56:16.647268   30524 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 10:56:16.647298   30524 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/addons for local assets ...
	I0610 10:56:16.647386   30524 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/files for local assets ...
	I0610 10:56:16.647463   30524 filesync.go:149] local asset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> 107582.pem in /etc/ssl/certs
	I0610 10:56:16.647472   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /etc/ssl/certs/107582.pem
	I0610 10:56:16.647547   30524 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 10:56:16.656115   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /etc/ssl/certs/107582.pem (1708 bytes)
	I0610 10:56:16.678422   30524 start.go:296] duration metric: took 132.785526ms for postStartSetup
	I0610 10:56:16.678466   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:56:16.678740   30524 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0610 10:56:16.678764   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:56:16.681456   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.681793   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:56:16.681818   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.682024   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:56:16.682194   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:56:16.682351   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:56:16.682480   30524 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	W0610 10:56:16.766654   30524 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0610 10:56:16.766682   30524 fix.go:56] duration metric: took 1m35.539971634s for fixHost
	I0610 10:56:16.766702   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:56:16.769598   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.769916   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:56:16.769941   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.770107   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:56:16.770306   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:56:16.770485   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:56:16.770642   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:56:16.770836   30524 main.go:141] libmachine: Using SSH client type: native
	I0610 10:56:16.771025   30524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:56:16.771036   30524 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 10:56:16.881672   30524 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718016976.851445963
	
	I0610 10:56:16.881699   30524 fix.go:216] guest clock: 1718016976.851445963
	I0610 10:56:16.881706   30524 fix.go:229] Guest: 2024-06-10 10:56:16.851445963 +0000 UTC Remote: 2024-06-10 10:56:16.766689612 +0000 UTC m=+95.683159524 (delta=84.756351ms)
	I0610 10:56:16.881728   30524 fix.go:200] guest clock delta is within tolerance: 84.756351ms
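	(The fix step above parses the guest's `date +%s.%N` output, compares it to the host-side timestamp, and accepts the 84.756351ms delta as within tolerance. A small standalone sketch of that comparison follows; the 2s threshold is an assumed value for illustration, not the tolerance minikube itself uses.)

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// withinTolerance converts the guest's "seconds.nanoseconds" string to a time,
// subtracts the host-side reference time, and reports whether the absolute
// delta fits inside the given tolerance window.
func withinTolerance(guestOut string, remote time.Time, tol time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, false
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(remote)
	return delta, math.Abs(float64(delta)) <= float64(tol)
}

func main() {
	// Values taken from the log lines above.
	delta, ok := withinTolerance("1718016976.851445963", time.Unix(1718016976, 766689612), 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}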
	I0610 10:56:16.881733   30524 start.go:83] releasing machines lock for "ha-565925", held for 1m35.655035273s
	I0610 10:56:16.881753   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:56:16.882001   30524 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:56:16.884407   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.884788   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:56:16.884813   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.885036   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:56:16.885622   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:56:16.885800   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:56:16.885881   30524 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 10:56:16.885923   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:56:16.885974   30524 ssh_runner.go:195] Run: cat /version.json
	I0610 10:56:16.885997   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:56:16.888482   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.888507   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.888849   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:56:16.888877   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.888905   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:56:16.888921   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.889003   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:56:16.889176   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:56:16.889183   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:56:16.889379   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:56:16.889382   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:56:16.889551   30524 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:56:16.889565   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:56:16.889718   30524 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:56:17.011118   30524 ssh_runner.go:195] Run: systemctl --version
	I0610 10:56:17.017131   30524 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 10:56:17.216081   30524 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 10:56:17.223769   30524 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 10:56:17.223850   30524 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 10:56:17.233465   30524 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0610 10:56:17.233483   30524 start.go:494] detecting cgroup driver to use...
	I0610 10:56:17.233543   30524 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 10:56:17.249240   30524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 10:56:17.272860   30524 docker.go:217] disabling cri-docker service (if available) ...
	I0610 10:56:17.272920   30524 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 10:56:17.286910   30524 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 10:56:17.300438   30524 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 10:56:17.458186   30524 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 10:56:17.614805   30524 docker.go:233] disabling docker service ...
	I0610 10:56:17.614876   30524 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 10:56:17.632334   30524 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 10:56:17.647026   30524 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 10:56:17.806618   30524 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 10:56:17.960595   30524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 10:56:17.976431   30524 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 10:56:17.994520   30524 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0610 10:56:17.994572   30524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:56:18.005055   30524 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 10:56:18.005111   30524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:56:18.015347   30524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:56:18.025972   30524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:56:18.035997   30524 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 10:56:18.046374   30524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:56:18.056748   30524 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:56:18.067550   30524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:56:18.079015   30524 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 10:56:18.089287   30524 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 10:56:18.098589   30524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:56:18.248797   30524 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0610 10:57:52.551485   30524 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m34.302647129s)
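	(The sed commands above rewrite pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted, which here took over a minute and a half. Below is an in-memory Go sketch of the same substitutions; the config snippet is a made-up example, not the file from the VM.)

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"
[crio.runtime]
cgroup_manager = "systemd"
`
	// Mirror the two sed substitutions from the log: pin the pause image and
	// switch the cgroup manager to cgroupfs.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}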
	I0610 10:57:52.551522   30524 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 10:57:52.551583   30524 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 10:57:52.557137   30524 start.go:562] Will wait 60s for crictl version
	I0610 10:57:52.557197   30524 ssh_runner.go:195] Run: which crictl
	I0610 10:57:52.560833   30524 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 10:57:52.602747   30524 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0610 10:57:52.602812   30524 ssh_runner.go:195] Run: crio --version
	I0610 10:57:52.632305   30524 ssh_runner.go:195] Run: crio --version
	I0610 10:57:52.663707   30524 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0610 10:57:52.664992   30524 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:57:52.667804   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:57:52.668260   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:57:52.668300   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:57:52.668509   30524 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0610 10:57:52.673571   30524 kubeadm.go:877] updating cluster {Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.229 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 10:57:52.673697   30524 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 10:57:52.673733   30524 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 10:57:52.722568   30524 crio.go:514] all images are preloaded for cri-o runtime.
	I0610 10:57:52.722591   30524 crio.go:433] Images already preloaded, skipping extraction
	I0610 10:57:52.722634   30524 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 10:57:52.758588   30524 crio.go:514] all images are preloaded for cri-o runtime.
	I0610 10:57:52.758613   30524 cache_images.go:84] Images are preloaded, skipping loading
	I0610 10:57:52.758623   30524 kubeadm.go:928] updating node { 192.168.39.208 8443 v1.30.1 crio true true} ...
	I0610 10:57:52.758735   30524 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565925 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 10:57:52.758813   30524 ssh_runner.go:195] Run: crio config
	I0610 10:57:52.807160   30524 cni.go:84] Creating CNI manager for ""
	I0610 10:57:52.807180   30524 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0610 10:57:52.807188   30524 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 10:57:52.807207   30524 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.208 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-565925 NodeName:ha-565925 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 10:57:52.807474   30524 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-565925"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
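	(The kubeadm options logged above are rendered into the InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents shown in this block. A trimmed-down sketch of that kind of rendering with text/template follows; the template is a stand-in for illustration, not minikube's actual bootstrapper template.)

package main

import (
	"os"
	"text/template"
)

// kubeadmParams holds a few of the computed options that feed the config.
type kubeadmParams struct {
	NodeName  string
	NodeIP    string
	PodSubnet string
	K8sVer    string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVer}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	p := kubeadmParams{NodeName: "ha-565925", NodeIP: "192.168.39.208", PodSubnet: "10.244.0.0/16", K8sVer: "v1.30.1"}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}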
	
	I0610 10:57:52.807497   30524 kube-vip.go:115] generating kube-vip config ...
	I0610 10:57:52.807538   30524 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0610 10:57:52.821166   30524 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0610 10:57:52.821266   30524 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
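	(The kube-vip static pod above advertises 192.168.39.254 as a control-plane virtual IP with load-balancing enabled on port 8443. The standalone Go probe below simply checks that such a VIP accepts TCP connections; it is an illustration only and not part of the test.)

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// VIP address and port taken from the manifest above.
	conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("VIP reachable at", conn.RemoteAddr())
}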
	I0610 10:57:52.821314   30524 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 10:57:52.830928   30524 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 10:57:52.831003   30524 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0610 10:57:52.840191   30524 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0610 10:57:52.856314   30524 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 10:57:52.873456   30524 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0610 10:57:52.889534   30524 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0610 10:57:52.905592   30524 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0610 10:57:52.909983   30524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:57:53.084746   30524 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 10:57:53.099672   30524 certs.go:68] Setting up /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925 for IP: 192.168.39.208
	I0610 10:57:53.099692   30524 certs.go:194] generating shared ca certs ...
	I0610 10:57:53.099705   30524 certs.go:226] acquiring lock for ca certs: {Name:mke8b68fecbd8b649419d142cdde25446085a9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:57:53.099868   30524 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key
	I0610 10:57:53.099914   30524 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key
	I0610 10:57:53.099929   30524 certs.go:256] generating profile certs ...
	I0610 10:57:53.100014   30524 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.key
	I0610 10:57:53.100051   30524 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.17088615
	I0610 10:57:53.100070   30524 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.17088615 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.230 192.168.39.254]
	I0610 10:57:53.273760   30524 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.17088615 ...
	I0610 10:57:53.273791   30524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.17088615: {Name:mk79115d7de4bf61379a9c75b6c64a9b4dc80bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:57:53.274014   30524 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.17088615 ...
	I0610 10:57:53.274033   30524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.17088615: {Name:mk4d8a4986706bc557549784e21d622fc4d3ed07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:57:53.274155   30524 certs.go:381] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.17088615 -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt
	I0610 10:57:53.274312   30524 certs.go:385] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.17088615 -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key
	I0610 10:57:53.274447   30524 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key
	I0610 10:57:53.274463   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 10:57:53.274477   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 10:57:53.274492   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 10:57:53.274507   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 10:57:53.274521   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 10:57:53.274536   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 10:57:53.274550   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 10:57:53.274564   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 10:57:53.274613   30524 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem (1338 bytes)
	W0610 10:57:53.274643   30524 certs.go:480] ignoring /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758_empty.pem, impossibly tiny 0 bytes
	I0610 10:57:53.274656   30524 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 10:57:53.274681   30524 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem (1082 bytes)
	I0610 10:57:53.274704   30524 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem (1123 bytes)
	I0610 10:57:53.274728   30524 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem (1675 bytes)
	I0610 10:57:53.274768   30524 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem (1708 bytes)
	I0610 10:57:53.274798   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:57:53.274814   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem -> /usr/share/ca-certificates/10758.pem
	I0610 10:57:53.274829   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /usr/share/ca-certificates/107582.pem
	I0610 10:57:53.275331   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 10:57:53.300829   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 10:57:53.324567   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 10:57:53.350089   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 10:57:53.374999   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0610 10:57:53.397824   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 10:57:53.421021   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 10:57:53.446630   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 10:57:53.470414   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 10:57:53.493000   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem --> /usr/share/ca-certificates/10758.pem (1338 bytes)
	I0610 10:57:53.515339   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /usr/share/ca-certificates/107582.pem (1708 bytes)
	I0610 10:57:53.537877   30524 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 10:57:53.553878   30524 ssh_runner.go:195] Run: openssl version
	I0610 10:57:53.559722   30524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 10:57:53.569566   30524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:57:53.574152   30524 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:57:53.574204   30524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:57:53.579638   30524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 10:57:53.588481   30524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10758.pem && ln -fs /usr/share/ca-certificates/10758.pem /etc/ssl/certs/10758.pem"
	I0610 10:57:53.598838   30524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10758.pem
	I0610 10:57:53.603320   30524 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:34 /usr/share/ca-certificates/10758.pem
	I0610 10:57:53.603377   30524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10758.pem
	I0610 10:57:53.608835   30524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10758.pem /etc/ssl/certs/51391683.0"
	I0610 10:57:53.617653   30524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107582.pem && ln -fs /usr/share/ca-certificates/107582.pem /etc/ssl/certs/107582.pem"
	I0610 10:57:53.628558   30524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107582.pem
	I0610 10:57:53.633075   30524 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:34 /usr/share/ca-certificates/107582.pem
	I0610 10:57:53.633128   30524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107582.pem
	I0610 10:57:53.638735   30524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107582.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 10:57:53.648052   30524 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 10:57:53.652519   30524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0610 10:57:53.658463   30524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0610 10:57:53.664313   30524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0610 10:57:53.670045   30524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0610 10:57:53.676237   30524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0610 10:57:53.681823   30524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
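	(Each `openssl x509 -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now. The same check in Go is sketched below; the certificate path is a command-line argument supplied by the caller, not a path taken from the log.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path will
// expire within the given window, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: checkend <cert.pem>")
		os.Exit(2)
	}
	soon, err := expiresWithin(os.Args[1], 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("expires within 24h: %v\n", soon)
}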
	I0610 10:57:53.687578   30524 kubeadm.go:391] StartCluster: {Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.229 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:57:53.687693   30524 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0610 10:57:53.687749   30524 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 10:57:53.733352   30524 cri.go:89] found id: "30454a419886c40b480f6310ea93590cfd5ce458d59101eb2f1d8b18ccc00fe3"
	I0610 10:57:53.733379   30524 cri.go:89] found id: "3f42a3959512141305a423acbd9e3651a0d52b5082c682b258cd4164bf4c8e22"
	I0610 10:57:53.733385   30524 cri.go:89] found id: "895531b30d08486c2c45c81d3c4061852a40480faff500bc98d063e08c3908f2"
	I0610 10:57:53.733390   30524 cri.go:89] found id: "ba05d1801bbb55716b014287ef6d2a8e0065c2e60eb0da2be941e285cce4111d"
	I0610 10:57:53.733395   30524 cri.go:89] found id: "18be5875f033dc26e05de432e9aafd5da62427c82b8a7148b7a2315e67a331fa"
	I0610 10:57:53.733400   30524 cri.go:89] found id: "031c3214a18181965175ad1ce4be9461912a8f144a9fd8499e18a516fbc4c24b"
	I0610 10:57:53.733403   30524 cri.go:89] found id: "6d2fc31bedad8ed60279933edc3fdf9c744a1606b0249fb4358d66b5c7884b47"
	I0610 10:57:53.733407   30524 cri.go:89] found id: "0a358cc1cc573aa1750cc09e41a48373a9ec054c4093e9b04258e36921b56cf5"
	I0610 10:57:53.733409   30524 cri.go:89] found id: "d6b392205cc4da349937ddbd66cd5e4e32466011eb011c9e13a0214e5aeab566"
	I0610 10:57:53.733415   30524 cri.go:89] found id: "ca1b692a8aa8fde2427299395cc8e93b726bd59d0d7029f6a172775a2ed06780"
	I0610 10:57:53.733418   30524 cri.go:89] found id: "d73c4fbf165479cdecba73b84a6325cb243bbb4cd1fed39f1c9a2c00168252e1"
	I0610 10:57:53.733420   30524 cri.go:89] found id: "a51d5bffe5db4200ac6336c7ca76cc182a95d91ff55c5a12955c341dd76f23c5"
	I0610 10:57:53.733422   30524 cri.go:89] found id: "10ce07d12f096d630f9093eb4eeb3bcfb435174cad5058aad05bd4c955206bef"
	I0610 10:57:53.733425   30524 cri.go:89] found id: "a35ae66a1bbe396e6ff9d769def35e984902ed42b5989274e34cad8f90ba2627"
	I0610 10:57:53.733430   30524 cri.go:89] found id: "1f037e4537f6182747a78c8398e388d1cd43fe536754d6d8a50f52b8689b3163"
	I0610 10:57:53.733432   30524 cri.go:89] found id: "534a412f3a743952b0fba0175071fb9a47fd04169c4014721c7e5c6931d7e62f"
	I0610 10:57:53.733435   30524 cri.go:89] found id: "fa492285e9f663d2b76c575594b5ba550e97e7861c891c05022d7e2ac1a78f91"
	I0610 10:57:53.733439   30524 cri.go:89] found id: "538119110afb1122b2b9d43d0a15441ed76f351b1221ca19caa981d3aab0eb82"
	I0610 10:57:53.733442   30524 cri.go:89] found id: "15b93b06d8221ab57065f3fdffaa93240b54fcaea6359ee510147933ebc24cdd"
	I0610 10:57:53.733445   30524 cri.go:89] found id: ""
	I0610 10:57:53.733492   30524 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-linux-amd64 start -p ha-565925 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio" : exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-565925 -n ha-565925
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-565925 logs -n 25: (1.58679022s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-565925 cp ha-565925-m03:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04:/home/docker/cp-test_ha-565925-m03_ha-565925-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925-m04 sudo cat                                          | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m03_ha-565925-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-565925 cp testdata/cp-test.txt                                                | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1107448961/001/cp-test_ha-565925-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925:/home/docker/cp-test_ha-565925-m04_ha-565925.txt                       |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925 sudo cat                                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m04_ha-565925.txt                                 |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m02:/home/docker/cp-test_ha-565925-m04_ha-565925-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925-m02 sudo cat                                          | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m04_ha-565925-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m03:/home/docker/cp-test_ha-565925-m04_ha-565925-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925-m03 sudo cat                                          | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m04_ha-565925-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-565925 node stop m02 -v=7                                                     | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-565925 node start m02 -v=7                                                    | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:44 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-565925 -v=7                                                           | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-565925 -v=7                                                                | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-565925 --wait=true -v=7                                                    | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:47 UTC | 10 Jun 24 10:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-565925                                                                | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:51 UTC |                     |
	| node    | ha-565925 node delete m03 -v=7                                                   | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:52 UTC | 10 Jun 24 10:52 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-565925 stop -v=7                                                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:52 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-565925 --wait=true                                                         | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:54 UTC |                     |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 10:54:41
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 10:54:41.118006   30524 out.go:291] Setting OutFile to fd 1 ...
	I0610 10:54:41.118313   30524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:54:41.118326   30524 out.go:304] Setting ErrFile to fd 2...
	I0610 10:54:41.118331   30524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:54:41.118586   30524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 10:54:41.119100   30524 out.go:298] Setting JSON to false
	I0610 10:54:41.120030   30524 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2222,"bootTime":1718014659,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 10:54:41.120088   30524 start.go:139] virtualization: kvm guest
	I0610 10:54:41.122252   30524 out.go:177] * [ha-565925] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 10:54:41.123728   30524 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 10:54:41.123731   30524 notify.go:220] Checking for updates...
	I0610 10:54:41.125175   30524 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:54:41.126614   30524 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 10:54:41.128031   30524 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:54:41.129312   30524 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 10:54:41.130778   30524 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:54:41.132606   30524 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:54:41.133157   30524 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:54:41.133241   30524 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:54:41.148356   30524 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45461
	I0610 10:54:41.148855   30524 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:54:41.149465   30524 main.go:141] libmachine: Using API Version  1
	I0610 10:54:41.149493   30524 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:54:41.149856   30524 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:54:41.150063   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:54:41.150360   30524 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 10:54:41.150685   30524 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:54:41.150725   30524 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:54:41.166173   30524 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46089
	I0610 10:54:41.166610   30524 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:54:41.167143   30524 main.go:141] libmachine: Using API Version  1
	I0610 10:54:41.167177   30524 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:54:41.167585   30524 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:54:41.167745   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:54:41.204423   30524 out.go:177] * Using the kvm2 driver based on existing profile
	I0610 10:54:41.205821   30524 start.go:297] selected driver: kvm2
	I0610 10:54:41.205839   30524 start.go:901] validating driver "kvm2" against &{Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.229 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:54:41.206044   30524 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:54:41.206508   30524 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:54:41.206610   30524 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19046-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0610 10:54:41.221453   30524 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0610 10:54:41.222080   30524 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:54:41.222117   30524 cni.go:84] Creating CNI manager for ""
	I0610 10:54:41.222122   30524 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0610 10:54:41.222169   30524 start.go:340] cluster config:
	{Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.229 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:54:41.222302   30524 iso.go:125] acquiring lock: {Name:mk85871dbcb370cef71a4aa2ff7ef43503908c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:54:41.224822   30524 out.go:177] * Starting "ha-565925" primary control-plane node in "ha-565925" cluster
	I0610 10:54:41.226052   30524 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 10:54:41.226097   30524 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0610 10:54:41.226110   30524 cache.go:56] Caching tarball of preloaded images
	I0610 10:54:41.226211   30524 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 10:54:41.226230   30524 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0610 10:54:41.226375   30524 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json ...
	I0610 10:54:41.226626   30524 start.go:360] acquireMachinesLock for ha-565925: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:54:41.226690   30524 start.go:364] duration metric: took 37.509µs to acquireMachinesLock for "ha-565925"
	I0610 10:54:41.226705   30524 start.go:96] Skipping create...Using existing machine configuration
	I0610 10:54:41.226712   30524 fix.go:54] fixHost starting: 
	I0610 10:54:41.227120   30524 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:54:41.227161   30524 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:54:41.242583   30524 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42377
	I0610 10:54:41.242978   30524 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:54:41.243450   30524 main.go:141] libmachine: Using API Version  1
	I0610 10:54:41.243475   30524 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:54:41.243758   30524 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:54:41.243949   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:54:41.244094   30524 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:54:41.245612   30524 fix.go:112] recreateIfNeeded on ha-565925: state=Running err=<nil>
	W0610 10:54:41.245647   30524 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 10:54:41.247531   30524 out.go:177] * Updating the running kvm2 "ha-565925" VM ...
	I0610 10:54:41.248547   30524 machine.go:94] provisionDockerMachine start ...
	I0610 10:54:41.248566   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:54:41.248752   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:54:41.251686   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.252215   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:54:41.252246   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.252393   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:54:41.252533   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.252678   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.252823   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:54:41.253014   30524 main.go:141] libmachine: Using SSH client type: native
	I0610 10:54:41.253203   30524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:54:41.253216   30524 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 10:54:41.373744   30524 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565925
	
	I0610 10:54:41.373773   30524 main.go:141] libmachine: (ha-565925) Calling .GetMachineName
	I0610 10:54:41.374028   30524 buildroot.go:166] provisioning hostname "ha-565925"
	I0610 10:54:41.374051   30524 main.go:141] libmachine: (ha-565925) Calling .GetMachineName
	I0610 10:54:41.374251   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:54:41.376909   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.377435   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:54:41.377469   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.377677   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:54:41.377868   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.378048   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.378178   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:54:41.378464   30524 main.go:141] libmachine: Using SSH client type: native
	I0610 10:54:41.378656   30524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:54:41.378674   30524 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565925 && echo "ha-565925" | sudo tee /etc/hostname
	I0610 10:54:41.508245   30524 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565925
	
	I0610 10:54:41.508284   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:54:41.511418   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.511816   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:54:41.511845   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.512073   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:54:41.512267   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.512447   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.512583   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:54:41.512730   30524 main.go:141] libmachine: Using SSH client type: native
	I0610 10:54:41.512872   30524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:54:41.512888   30524 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565925' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565925/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565925' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 10:54:41.622194   30524 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 10:54:41.622225   30524 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19046-3880/.minikube CaCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19046-3880/.minikube}
	I0610 10:54:41.622254   30524 buildroot.go:174] setting up certificates
	I0610 10:54:41.622265   30524 provision.go:84] configureAuth start
	I0610 10:54:41.622280   30524 main.go:141] libmachine: (ha-565925) Calling .GetMachineName
	I0610 10:54:41.622553   30524 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:54:41.625606   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.626048   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:54:41.626080   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.626269   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:54:41.628920   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.629378   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:54:41.629408   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.629544   30524 provision.go:143] copyHostCerts
	I0610 10:54:41.629575   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 10:54:41.629647   30524 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem, removing ...
	I0610 10:54:41.629661   30524 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 10:54:41.629736   30524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem (1082 bytes)
	I0610 10:54:41.629825   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 10:54:41.629850   30524 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem, removing ...
	I0610 10:54:41.629856   30524 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 10:54:41.629892   30524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem (1123 bytes)
	I0610 10:54:41.629951   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 10:54:41.629974   30524 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem, removing ...
	I0610 10:54:41.629983   30524 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 10:54:41.630016   30524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem (1675 bytes)
	I0610 10:54:41.630079   30524 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem org=jenkins.ha-565925 san=[127.0.0.1 192.168.39.208 ha-565925 localhost minikube]
	I0610 10:54:41.796354   30524 provision.go:177] copyRemoteCerts
	I0610 10:54:41.796408   30524 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 10:54:41.796427   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:54:41.799182   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.799580   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:54:41.799612   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.799774   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:54:41.799964   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.800110   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:54:41.800269   30524 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:54:41.886743   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 10:54:41.886807   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 10:54:41.911979   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 10:54:41.912068   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0610 10:54:41.934784   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 10:54:41.934862   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 10:54:41.965258   30524 provision.go:87] duration metric: took 342.978909ms to configureAuth
	I0610 10:54:41.965284   30524 buildroot.go:189] setting minikube options for container-runtime
	I0610 10:54:41.965557   30524 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:54:41.965650   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:54:41.968652   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.969098   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:54:41.969127   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.969313   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:54:41.969506   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.969658   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.969782   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:54:41.969945   30524 main.go:141] libmachine: Using SSH client type: native
	I0610 10:54:41.970089   30524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:54:41.970105   30524 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 10:56:16.545583   30524 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 10:56:16.545610   30524 machine.go:97] duration metric: took 1m35.297049726s to provisionDockerMachine
	I0610 10:56:16.545622   30524 start.go:293] postStartSetup for "ha-565925" (driver="kvm2")
	I0610 10:56:16.545634   30524 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 10:56:16.545648   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:56:16.545946   30524 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 10:56:16.545974   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:56:16.549506   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.549888   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:56:16.549917   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.550060   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:56:16.550291   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:56:16.550434   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:56:16.550585   30524 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:56:16.643222   30524 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 10:56:16.647268   30524 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 10:56:16.647298   30524 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/addons for local assets ...
	I0610 10:56:16.647386   30524 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/files for local assets ...
	I0610 10:56:16.647463   30524 filesync.go:149] local asset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> 107582.pem in /etc/ssl/certs
	I0610 10:56:16.647472   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /etc/ssl/certs/107582.pem
	I0610 10:56:16.647547   30524 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 10:56:16.656115   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /etc/ssl/certs/107582.pem (1708 bytes)
	I0610 10:56:16.678422   30524 start.go:296] duration metric: took 132.785526ms for postStartSetup
	I0610 10:56:16.678466   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:56:16.678740   30524 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0610 10:56:16.678764   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:56:16.681456   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.681793   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:56:16.681818   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.682024   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:56:16.682194   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:56:16.682351   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:56:16.682480   30524 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	W0610 10:56:16.766654   30524 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0610 10:56:16.766682   30524 fix.go:56] duration metric: took 1m35.539971634s for fixHost
	I0610 10:56:16.766702   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:56:16.769598   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.769916   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:56:16.769941   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.770107   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:56:16.770306   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:56:16.770485   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:56:16.770642   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:56:16.770836   30524 main.go:141] libmachine: Using SSH client type: native
	I0610 10:56:16.771025   30524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:56:16.771036   30524 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 10:56:16.881672   30524 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718016976.851445963
	
	I0610 10:56:16.881699   30524 fix.go:216] guest clock: 1718016976.851445963
	I0610 10:56:16.881706   30524 fix.go:229] Guest: 2024-06-10 10:56:16.851445963 +0000 UTC Remote: 2024-06-10 10:56:16.766689612 +0000 UTC m=+95.683159524 (delta=84.756351ms)
	I0610 10:56:16.881728   30524 fix.go:200] guest clock delta is within tolerance: 84.756351ms
	I0610 10:56:16.881733   30524 start.go:83] releasing machines lock for "ha-565925", held for 1m35.655035273s
	I0610 10:56:16.881753   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:56:16.882001   30524 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:56:16.884407   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.884788   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:56:16.884813   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.885036   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:56:16.885622   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:56:16.885800   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:56:16.885881   30524 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 10:56:16.885923   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:56:16.885974   30524 ssh_runner.go:195] Run: cat /version.json
	I0610 10:56:16.885997   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:56:16.888482   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.888507   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.888849   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:56:16.888877   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.888905   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:56:16.888921   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.889003   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:56:16.889176   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:56:16.889183   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:56:16.889379   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:56:16.889382   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:56:16.889551   30524 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:56:16.889565   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:56:16.889718   30524 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:56:17.011118   30524 ssh_runner.go:195] Run: systemctl --version
	I0610 10:56:17.017131   30524 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 10:56:17.216081   30524 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 10:56:17.223769   30524 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 10:56:17.223850   30524 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 10:56:17.233465   30524 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0610 10:56:17.233483   30524 start.go:494] detecting cgroup driver to use...
	I0610 10:56:17.233543   30524 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 10:56:17.249240   30524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 10:56:17.272860   30524 docker.go:217] disabling cri-docker service (if available) ...
	I0610 10:56:17.272920   30524 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 10:56:17.286910   30524 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 10:56:17.300438   30524 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 10:56:17.458186   30524 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 10:56:17.614805   30524 docker.go:233] disabling docker service ...
	I0610 10:56:17.614876   30524 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 10:56:17.632334   30524 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 10:56:17.647026   30524 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 10:56:17.806618   30524 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 10:56:17.960595   30524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 10:56:17.976431   30524 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 10:56:17.994520   30524 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0610 10:56:17.994572   30524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:56:18.005055   30524 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 10:56:18.005111   30524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:56:18.015347   30524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:56:18.025972   30524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:56:18.035997   30524 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 10:56:18.046374   30524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:56:18.056748   30524 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:56:18.067550   30524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:56:18.079015   30524 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 10:56:18.089287   30524 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 10:56:18.098589   30524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:56:18.248797   30524 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0610 10:57:52.551485   30524 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m34.302647129s)
	I0610 10:57:52.551522   30524 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 10:57:52.551583   30524 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 10:57:52.557137   30524 start.go:562] Will wait 60s for crictl version
	I0610 10:57:52.557197   30524 ssh_runner.go:195] Run: which crictl
	I0610 10:57:52.560833   30524 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 10:57:52.602747   30524 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0610 10:57:52.602812   30524 ssh_runner.go:195] Run: crio --version
	I0610 10:57:52.632305   30524 ssh_runner.go:195] Run: crio --version
	I0610 10:57:52.663707   30524 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0610 10:57:52.664992   30524 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:57:52.667804   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:57:52.668260   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:57:52.668300   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:57:52.668509   30524 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0610 10:57:52.673571   30524 kubeadm.go:877] updating cluster {Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.229 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 10:57:52.673697   30524 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 10:57:52.673733   30524 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 10:57:52.722568   30524 crio.go:514] all images are preloaded for cri-o runtime.
	I0610 10:57:52.722591   30524 crio.go:433] Images already preloaded, skipping extraction
	I0610 10:57:52.722634   30524 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 10:57:52.758588   30524 crio.go:514] all images are preloaded for cri-o runtime.
	I0610 10:57:52.758613   30524 cache_images.go:84] Images are preloaded, skipping loading
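crio.go concludes that the preload tarball does not need extracting because every required image is already reported by `sudo crictl images --output json`. A rough sketch of that comparison; the JSON field names (`images`, `repoTags`) follow crictl's output as I understand it, and the two image references in main are only illustrative examples for a v1.30.1 cluster:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// crictl images --output json is assumed to return {"images":[{"repoTags":[...]},...]}.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func preloadedImages() (map[string]bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return nil, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return nil, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	return have, nil
}

func main() {
	have, err := preloadedImages()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Example references only; the real check walks the full preload manifest.
	for _, want := range []string{
		"registry.k8s.io/kube-apiserver:v1.30.1",
		"registry.k8s.io/coredns/coredns:v1.11.1",
	} {
		fmt.Printf("%s preloaded: %v\n", want, have[want])
	}
}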
	I0610 10:57:52.758623   30524 kubeadm.go:928] updating node { 192.168.39.208 8443 v1.30.1 crio true true} ...
	I0610 10:57:52.758735   30524 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565925 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
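The [Unit]/[Service]/[Install] snippet above is the kubelet drop-in that is later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 309-byte scp further down). A minimal sketch of rendering such a drop-in with text/template, filled with the hostname and node IP from this run; the template text here is an illustration, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	// Values as they appear for the primary control plane in this run.
	err := tmpl.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.30.1",
		"NodeName":          "ha-565925",
		"NodeIP":            "192.168.39.208",
	})
	if err != nil {
		panic(err)
	}
}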
	I0610 10:57:52.758813   30524 ssh_runner.go:195] Run: crio config
	I0610 10:57:52.807160   30524 cni.go:84] Creating CNI manager for ""
	I0610 10:57:52.807180   30524 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0610 10:57:52.807188   30524 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 10:57:52.807207   30524 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.208 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-565925 NodeName:ha-565925 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 10:57:52.807474   30524 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-565925"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
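The file just printed bundles four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`; it is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A stdlib-only sketch that splits such a bundle and reports each document's kind, assuming `kind:` appears on its own line as it does here:

package main

import (
	"fmt"
	"strings"
)

// listKinds splits a multi-document YAML bundle on "---" separators and
// returns the value of the first "kind:" line in each document.
func listKinds(bundle string) []string {
	var kinds []string
	for _, doc := range strings.Split(bundle, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			line = strings.TrimSpace(line)
			if strings.HasPrefix(line, "kind:") {
				kinds = append(kinds, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
				break
			}
		}
	}
	return kinds
}

func main() {
	bundle := `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration`
	fmt.Println(listKinds(bundle))
	// Output: [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
}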
	
	I0610 10:57:52.807497   30524 kube-vip.go:115] generating kube-vip config ...
	I0610 10:57:52.807538   30524 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0610 10:57:52.821166   30524 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0610 10:57:52.821266   30524 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
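kube-vip's control-plane load balancing depends on the IPVS kernel modules, which is why `modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack` is run before the manifest above is generated and auto-enabling is switched on. A small sketch that double-checks those modules by scanning /proc/modules (modules built into the kernel do not appear there, so treat this as a heuristic):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// loadedModules returns the set of module names listed in /proc/modules.
func loadedModules() (map[string]bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return nil, err
	}
	defer f.Close()

	mods := map[string]bool{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if fields := strings.Fields(sc.Text()); len(fields) > 0 {
			mods[fields[0]] = true
		}
	}
	return mods, sc.Err()
}

func main() {
	mods, err := loadedModules()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, m := range []string{"ip_vs", "ip_vs_rr", "ip_vs_wrr", "ip_vs_sh", "nf_conntrack"} {
		fmt.Printf("%-12s loaded: %v\n", m, mods[m])
	}
}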
	I0610 10:57:52.821314   30524 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 10:57:52.830928   30524 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 10:57:52.831003   30524 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0610 10:57:52.840191   30524 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0610 10:57:52.856314   30524 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 10:57:52.873456   30524 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0610 10:57:52.889534   30524 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0610 10:57:52.905592   30524 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0610 10:57:52.909983   30524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:57:53.084746   30524 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 10:57:53.099672   30524 certs.go:68] Setting up /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925 for IP: 192.168.39.208
	I0610 10:57:53.099692   30524 certs.go:194] generating shared ca certs ...
	I0610 10:57:53.099705   30524 certs.go:226] acquiring lock for ca certs: {Name:mke8b68fecbd8b649419d142cdde25446085a9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:57:53.099868   30524 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key
	I0610 10:57:53.099914   30524 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key
	I0610 10:57:53.099929   30524 certs.go:256] generating profile certs ...
	I0610 10:57:53.100014   30524 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.key
	I0610 10:57:53.100051   30524 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.17088615
	I0610 10:57:53.100070   30524 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.17088615 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.230 192.168.39.254]
	I0610 10:57:53.273760   30524 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.17088615 ...
	I0610 10:57:53.273791   30524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.17088615: {Name:mk79115d7de4bf61379a9c75b6c64a9b4dc80bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:57:53.274014   30524 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.17088615 ...
	I0610 10:57:53.274033   30524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.17088615: {Name:mk4d8a4986706bc557549784e21d622fc4d3ed07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:57:53.274155   30524 certs.go:381] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.17088615 -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt
	I0610 10:57:53.274312   30524 certs.go:385] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.17088615 -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key
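Generating apiserver.crt.17088615 means issuing a serving certificate whose SANs cover the service IP, localhost, both control-plane node IPs and the HA VIP listed above. A compressed standard-library sketch of that step; the real path in minikube's crypto.go reuses the cached minikubeCA key rather than the throwaway CA created here, and the 3-year lifetime mirrors the CertExpiration:26280h0m0s setting:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA (the real run reuses the cached one).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}

	// API server serving cert with the SAN IPs listed in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // ~26280h
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.208"), net.ParseIP("192.168.39.230"), net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}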
	I0610 10:57:53.274447   30524 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key
	I0610 10:57:53.274463   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 10:57:53.274477   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 10:57:53.274492   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 10:57:53.274507   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 10:57:53.274521   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 10:57:53.274536   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 10:57:53.274550   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 10:57:53.274564   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 10:57:53.274613   30524 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem (1338 bytes)
	W0610 10:57:53.274643   30524 certs.go:480] ignoring /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758_empty.pem, impossibly tiny 0 bytes
	I0610 10:57:53.274656   30524 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 10:57:53.274681   30524 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem (1082 bytes)
	I0610 10:57:53.274704   30524 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem (1123 bytes)
	I0610 10:57:53.274728   30524 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem (1675 bytes)
	I0610 10:57:53.274768   30524 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem (1708 bytes)
	I0610 10:57:53.274798   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:57:53.274814   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem -> /usr/share/ca-certificates/10758.pem
	I0610 10:57:53.274829   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /usr/share/ca-certificates/107582.pem
	I0610 10:57:53.275331   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 10:57:53.300829   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 10:57:53.324567   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 10:57:53.350089   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 10:57:53.374999   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0610 10:57:53.397824   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 10:57:53.421021   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 10:57:53.446630   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 10:57:53.470414   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 10:57:53.493000   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem --> /usr/share/ca-certificates/10758.pem (1338 bytes)
	I0610 10:57:53.515339   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /usr/share/ca-certificates/107582.pem (1708 bytes)
	I0610 10:57:53.537877   30524 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 10:57:53.553878   30524 ssh_runner.go:195] Run: openssl version
	I0610 10:57:53.559722   30524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 10:57:53.569566   30524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:57:53.574152   30524 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:57:53.574204   30524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:57:53.579638   30524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 10:57:53.588481   30524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10758.pem && ln -fs /usr/share/ca-certificates/10758.pem /etc/ssl/certs/10758.pem"
	I0610 10:57:53.598838   30524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10758.pem
	I0610 10:57:53.603320   30524 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:34 /usr/share/ca-certificates/10758.pem
	I0610 10:57:53.603377   30524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10758.pem
	I0610 10:57:53.608835   30524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10758.pem /etc/ssl/certs/51391683.0"
	I0610 10:57:53.617653   30524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107582.pem && ln -fs /usr/share/ca-certificates/107582.pem /etc/ssl/certs/107582.pem"
	I0610 10:57:53.628558   30524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107582.pem
	I0610 10:57:53.633075   30524 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:34 /usr/share/ca-certificates/107582.pem
	I0610 10:57:53.633128   30524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107582.pem
	I0610 10:57:53.638735   30524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107582.pem /etc/ssl/certs/3ec20f2e.0"
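The test -L / ln -fs pairs above create the <subject-hash>.0 symlinks that OpenSSL's CA lookup expects in /etc/ssl/certs. A sketch of the same idea in Go, shelling out to the same `openssl x509 -hash -noout` command the log runs (linkCACert is a hypothetical helper and needs root to write into /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a CA certificate and creates
// the /etc/ssl/certs/<hash>.0 symlink pointing at it, if not already present.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}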
	I0610 10:57:53.648052   30524 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 10:57:53.652519   30524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0610 10:57:53.658463   30524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0610 10:57:53.664313   30524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0610 10:57:53.670045   30524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0610 10:57:53.676237   30524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0610 10:57:53.681823   30524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
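Each `openssl x509 -noout -checkend 86400` call above asks whether a certificate expires within the next 86400 seconds, i.e. 24 hours. The same check in Go with crypto/x509, applied to a few of the certificate paths from this log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the Go equivalent of `openssl x509 -noout -in path -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	// The same certificates the log checks with -checkend 86400 (24h).
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Fprintf(os.Stderr, "%s: %v\n", p, err)
			continue
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}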
	I0610 10:57:53.687578   30524 kubeadm.go:391] StartCluster: {Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.229 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:57:53.687693   30524 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0610 10:57:53.687749   30524 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 10:57:53.733352   30524 cri.go:89] found id: "30454a419886c40b480f6310ea93590cfd5ce458d59101eb2f1d8b18ccc00fe3"
	I0610 10:57:53.733379   30524 cri.go:89] found id: "3f42a3959512141305a423acbd9e3651a0d52b5082c682b258cd4164bf4c8e22"
	I0610 10:57:53.733385   30524 cri.go:89] found id: "895531b30d08486c2c45c81d3c4061852a40480faff500bc98d063e08c3908f2"
	I0610 10:57:53.733390   30524 cri.go:89] found id: "ba05d1801bbb55716b014287ef6d2a8e0065c2e60eb0da2be941e285cce4111d"
	I0610 10:57:53.733395   30524 cri.go:89] found id: "18be5875f033dc26e05de432e9aafd5da62427c82b8a7148b7a2315e67a331fa"
	I0610 10:57:53.733400   30524 cri.go:89] found id: "031c3214a18181965175ad1ce4be9461912a8f144a9fd8499e18a516fbc4c24b"
	I0610 10:57:53.733403   30524 cri.go:89] found id: "6d2fc31bedad8ed60279933edc3fdf9c744a1606b0249fb4358d66b5c7884b47"
	I0610 10:57:53.733407   30524 cri.go:89] found id: "0a358cc1cc573aa1750cc09e41a48373a9ec054c4093e9b04258e36921b56cf5"
	I0610 10:57:53.733409   30524 cri.go:89] found id: "d6b392205cc4da349937ddbd66cd5e4e32466011eb011c9e13a0214e5aeab566"
	I0610 10:57:53.733415   30524 cri.go:89] found id: "ca1b692a8aa8fde2427299395cc8e93b726bd59d0d7029f6a172775a2ed06780"
	I0610 10:57:53.733418   30524 cri.go:89] found id: "d73c4fbf165479cdecba73b84a6325cb243bbb4cd1fed39f1c9a2c00168252e1"
	I0610 10:57:53.733420   30524 cri.go:89] found id: "a51d5bffe5db4200ac6336c7ca76cc182a95d91ff55c5a12955c341dd76f23c5"
	I0610 10:57:53.733422   30524 cri.go:89] found id: "10ce07d12f096d630f9093eb4eeb3bcfb435174cad5058aad05bd4c955206bef"
	I0610 10:57:53.733425   30524 cri.go:89] found id: "a35ae66a1bbe396e6ff9d769def35e984902ed42b5989274e34cad8f90ba2627"
	I0610 10:57:53.733430   30524 cri.go:89] found id: "1f037e4537f6182747a78c8398e388d1cd43fe536754d6d8a50f52b8689b3163"
	I0610 10:57:53.733432   30524 cri.go:89] found id: "534a412f3a743952b0fba0175071fb9a47fd04169c4014721c7e5c6931d7e62f"
	I0610 10:57:53.733435   30524 cri.go:89] found id: "fa492285e9f663d2b76c575594b5ba550e97e7861c891c05022d7e2ac1a78f91"
	I0610 10:57:53.733439   30524 cri.go:89] found id: "538119110afb1122b2b9d43d0a15441ed76f351b1221ca19caa981d3aab0eb82"
	I0610 10:57:53.733442   30524 cri.go:89] found id: "15b93b06d8221ab57065f3fdffaa93240b54fcaea6359ee510147933ebc24cdd"
	I0610 10:57:53.733445   30524 cri.go:89] found id: ""
	I0610 10:57:53.733492   30524 ssh_runner.go:195] Run: sudo runc list -f json
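The container IDs above come from the `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` invocation a few lines earlier, which prints one ID per line. A sketch of running that query and collecting the IDs (kubeSystemContainers is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// kubeSystemContainers runs the same crictl query as the log above and
// returns the container IDs it prints, one per line.
func kubeSystemContainers() ([]string, error) {
	cmd := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system")
	out, err := cmd.Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainers()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}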
	
	
	==> CRI-O <==
	Jun 10 11:05:31 ha-565925 crio[6561]: time="2024-06-10 11:05:31.014986108Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718017531014948790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87737bd8-42cf-4ed9-9bc7-ea8bed6e6107 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:05:31 ha-565925 crio[6561]: time="2024-06-10 11:05:31.015668449Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=480e3426-01ed-488b-94dc-963fc7ed389b name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:05:31 ha-565925 crio[6561]: time="2024-06-10 11:05:31.015779630Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=480e3426-01ed-488b-94dc-963fc7ed389b name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:05:31 ha-565925 crio[6561]: time="2024-06-10 11:05:31.016184977Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc71731db34e54cc482f777258e552da4eb09b06301d22a96d4b5b7a1c09553a,PodSandboxId:2301576baf44ec2b48a39ee83fb5a9bcb8a8f9655e5d368ac4b1373f193c70f1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:6,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718017278826933875,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a7adba8b85d829b73a4b55001ec3a5549587e6b92cba7280bc5042eb1d764a2,PodSandboxId:555188fecd0274a950ee2c75d96e55ba0e8e22f259a08df1f022bdcbea700980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718017260825295406,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307518a974a9d81484017b6def4bcb35734f73f49643e1e3b41a2e1bb4d72619,PodSandboxId:8777e890e5cc662fe143a51eeebf243bac07d02db168f69e8fbe6341b9e5d111,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718017258826262478,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4667bd353fdda8be94426be8fb00d6739c3209268ea60a077feb6d24afc39af7,PodSandboxId:9384a3551e3f6663c95c30015955798fba04704226e06db5bf249fb54feaf99e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718017242826163325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9596f78d1f9f1a08bb0774a454ecd00ac562ae38017ea807582d9fe153c3ae83,PodSandboxId:8777e890e5cc662fe143a51eeebf243bac07d02db168f69e8fbe6341b9e5d111,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718017149836607469,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5196758907fd1be55dfb4db8fdf71169c2226b54a2688835b92147fbaf8b52,PodSandboxId:555188fecd0274a950ee2c75d96e55ba0e8e22f259a08df1f022bdcbea700980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718017149822916335,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14ab46f3546bcfed28150552839b3cc283c32cb309a33ebb0ea67459079f5eb,PodSandboxId:20e1ade57d2542a1c7331c6dcfc2127d5be744e132190337c981b0fc4bed8da4,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718017112116718602,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.
kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9700dc3bf19471a12df22302b585640a8bba48b9c13b6f07e34797964a72bf9,PodSandboxId:9384a3551e3f6663c95c30015955798fba04704226e06db5bf249fb54feaf99e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718017078747702884,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.po
d.terminationGracePeriod: 30,},},&Container{Id:dac5139d75fe4e3d41205aa1803b8091a016d26e34b621f388426b4f28c9788f,PodSandboxId:16504243eb24ec6452badeef3694a359b10b881b6cbee11932acfb706fa05569,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718017079128195775,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b7f7bf516814f2c5dbe0fbc6daa3a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:a6bfc115b83fe8e36c67f3ce6d994b1cce135626a1c3a20165012107bebf06ca,PodSandboxId:868f5b2fa2a9647cf0d9f242ebbb87f7167e73566a4cfd589ec6112e3a3d61c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718017079118362076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b445c2d316f603033fc8e810
ba508bba9398ff7de68e41b686958ee2cb8fcfd,PodSandboxId:b49a011721881d8ce465640daa30b2d69b6cae387aca077c70daa38e2c3cc389,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718017078925256217,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbcde3714e14329d6337427e054bc34da36c1a1a94a6aad9cc9ae1b179eebdd,PodSandboxId:2301576baf44ec2b48a39ee83fb5a9bcb8a8f9655e5d368ac4b1373f193c70f1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718017078902111617,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83c407eac5c82e6b20991f6cfe3e6f662eb2f7cbcc8a79638d675d463c8120dd,PodSandboxId:cea0105c4b4e7225b5371932b06a504c5cbf20c43d948908687c1708dd82410d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718017078803503346,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminatio
nMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d162210bec339f31d4b24d962ad510c8c5712d5173ea2a82ebe50e463194bf12,PodSandboxId:dd0f08cb4bc7915dd3c4046a654abb28b7711f688615e361aaf3b5a874d439d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718017078580667689,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e293a1cc869311fd15c723f109226cd7cf9e58f9c0ce73b81e66e643ba0824,PodSandboxId:276099ec692d58a43f2137fdb8c495cf2b238659587a093f63455929cc0159f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718016607125233498,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:031c3214a18181965175ad1ce4be9461912a8f144a9fd8499e18a516fbc4c24b,PodSandboxId:cfe7af207d454e48b4c9a313d5fffb0f03c0fb7b7fb6a479a1b43dc5e8d3fa0f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1718016585794533885,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b7f7bf516814f2c5dbe0fbc6daa3a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6b392205cc4da
349937ddbd66cd5e4e32466011eb011c9e13a0214e5aeab566,PodSandboxId:92b6f53b325e00531ba020a4091debef83c310509523dcadd98455c576589d1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718016573870537430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a358cc1cc573aa1750cc09e41a48373a9ec054c4093e9b0
4258e36921b56cf5,PodSandboxId:3afe7674416b272a7b1f2f0765e713a115b8a9fc430d4da60440baaec31d798c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718016573906904776,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a51d5bffe5db4200ac6336c7ca76cc182a95d91ff55c5a12955c341dd76f23c5,PodSandboxId:38fe7da9f5e494f306636e4ee0f552c2e44d43db2ef1a04a5ea901f66d5db1e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718016573751979920,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73c4fbf165479cdecba73b84a6325cb243bbb4cd1fed39f1c9a2c00168252e1,PodSandboxId:d3e905f6d61a711b33785d0332754575ce24a61714424b5bce0bd881d36495df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718016573784490891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca1b692a8aa8fde2427299395cc8e93b726bd59d0d7029f6a172775a2ed06780,PodSandboxId:d74bbdd47986be76d0cd64bcc477460ea153199ba5f7b49f49a95d6c410dc7c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718016573866917347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"conta
inerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=480e3426-01ed-488b-94dc-963fc7ed389b name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:05:31 ha-565925 crio[6561]: time="2024-06-10 11:05:31.065698104Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ebbfc9bc-250c-458a-8a4a-ef551904bac4 name=/runtime.v1.RuntimeService/Version
	Jun 10 11:05:31 ha-565925 crio[6561]: time="2024-06-10 11:05:31.065825500Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ebbfc9bc-250c-458a-8a4a-ef551904bac4 name=/runtime.v1.RuntimeService/Version
	Jun 10 11:05:31 ha-565925 crio[6561]: time="2024-06-10 11:05:31.067148697Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a317decc-a347-4baa-95ff-c58023d975b0 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:05:31 ha-565925 crio[6561]: time="2024-06-10 11:05:31.067708882Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718017531067682694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a317decc-a347-4baa-95ff-c58023d975b0 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:05:31 ha-565925 crio[6561]: time="2024-06-10 11:05:31.068892832Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=efe73cb0-0866-4248-aadf-c127455e4404 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:05:31 ha-565925 crio[6561]: time="2024-06-10 11:05:31.068951781Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=efe73cb0-0866-4248-aadf-c127455e4404 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:05:31 ha-565925 crio[6561]: time="2024-06-10 11:05:31.069375479Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc71731db34e54cc482f777258e552da4eb09b06301d22a96d4b5b7a1c09553a,PodSandboxId:2301576baf44ec2b48a39ee83fb5a9bcb8a8f9655e5d368ac4b1373f193c70f1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:6,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718017278826933875,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a7adba8b85d829b73a4b55001ec3a5549587e6b92cba7280bc5042eb1d764a2,PodSandboxId:555188fecd0274a950ee2c75d96e55ba0e8e22f259a08df1f022bdcbea700980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718017260825295406,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307518a974a9d81484017b6def4bcb35734f73f49643e1e3b41a2e1bb4d72619,PodSandboxId:8777e890e5cc662fe143a51eeebf243bac07d02db168f69e8fbe6341b9e5d111,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718017258826262478,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4667bd353fdda8be94426be8fb00d6739c3209268ea60a077feb6d24afc39af7,PodSandboxId:9384a3551e3f6663c95c30015955798fba04704226e06db5bf249fb54feaf99e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718017242826163325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9596f78d1f9f1a08bb0774a454ecd00ac562ae38017ea807582d9fe153c3ae83,PodSandboxId:8777e890e5cc662fe143a51eeebf243bac07d02db168f69e8fbe6341b9e5d111,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718017149836607469,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5196758907fd1be55dfb4db8fdf71169c2226b54a2688835b92147fbaf8b52,PodSandboxId:555188fecd0274a950ee2c75d96e55ba0e8e22f259a08df1f022bdcbea700980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718017149822916335,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14ab46f3546bcfed28150552839b3cc283c32cb309a33ebb0ea67459079f5eb,PodSandboxId:20e1ade57d2542a1c7331c6dcfc2127d5be744e132190337c981b0fc4bed8da4,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718017112116718602,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.
kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9700dc3bf19471a12df22302b585640a8bba48b9c13b6f07e34797964a72bf9,PodSandboxId:9384a3551e3f6663c95c30015955798fba04704226e06db5bf249fb54feaf99e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718017078747702884,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.po
d.terminationGracePeriod: 30,},},&Container{Id:dac5139d75fe4e3d41205aa1803b8091a016d26e34b621f388426b4f28c9788f,PodSandboxId:16504243eb24ec6452badeef3694a359b10b881b6cbee11932acfb706fa05569,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718017079128195775,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b7f7bf516814f2c5dbe0fbc6daa3a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:a6bfc115b83fe8e36c67f3ce6d994b1cce135626a1c3a20165012107bebf06ca,PodSandboxId:868f5b2fa2a9647cf0d9f242ebbb87f7167e73566a4cfd589ec6112e3a3d61c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718017079118362076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b445c2d316f603033fc8e810
ba508bba9398ff7de68e41b686958ee2cb8fcfd,PodSandboxId:b49a011721881d8ce465640daa30b2d69b6cae387aca077c70daa38e2c3cc389,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718017078925256217,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbcde3714e14329d6337427e054bc34da36c1a1a94a6aad9cc9ae1b179eebdd,PodSandboxId:2301576baf44ec2b48a39ee83fb5a9bcb8a8f9655e5d368ac4b1373f193c70f1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718017078902111617,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83c407eac5c82e6b20991f6cfe3e6f662eb2f7cbcc8a79638d675d463c8120dd,PodSandboxId:cea0105c4b4e7225b5371932b06a504c5cbf20c43d948908687c1708dd82410d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718017078803503346,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminatio
nMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d162210bec339f31d4b24d962ad510c8c5712d5173ea2a82ebe50e463194bf12,PodSandboxId:dd0f08cb4bc7915dd3c4046a654abb28b7711f688615e361aaf3b5a874d439d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718017078580667689,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e293a1cc869311fd15c723f109226cd7cf9e58f9c0ce73b81e66e643ba0824,PodSandboxId:276099ec692d58a43f2137fdb8c495cf2b238659587a093f63455929cc0159f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718016607125233498,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:031c3214a18181965175ad1ce4be9461912a8f144a9fd8499e18a516fbc4c24b,PodSandboxId:cfe7af207d454e48b4c9a313d5fffb0f03c0fb7b7fb6a479a1b43dc5e8d3fa0f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1718016585794533885,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b7f7bf516814f2c5dbe0fbc6daa3a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6b392205cc4da
349937ddbd66cd5e4e32466011eb011c9e13a0214e5aeab566,PodSandboxId:92b6f53b325e00531ba020a4091debef83c310509523dcadd98455c576589d1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718016573870537430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a358cc1cc573aa1750cc09e41a48373a9ec054c4093e9b0
4258e36921b56cf5,PodSandboxId:3afe7674416b272a7b1f2f0765e713a115b8a9fc430d4da60440baaec31d798c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718016573906904776,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a51d5bffe5db4200ac6336c7ca76cc182a95d91ff55c5a12955c341dd76f23c5,PodSandboxId:38fe7da9f5e494f306636e4ee0f552c2e44d43db2ef1a04a5ea901f66d5db1e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718016573751979920,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73c4fbf165479cdecba73b84a6325cb243bbb4cd1fed39f1c9a2c00168252e1,PodSandboxId:d3e905f6d61a711b33785d0332754575ce24a61714424b5bce0bd881d36495df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718016573784490891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca1b692a8aa8fde2427299395cc8e93b726bd59d0d7029f6a172775a2ed06780,PodSandboxId:d74bbdd47986be76d0cd64bcc477460ea153199ba5f7b49f49a95d6c410dc7c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718016573866917347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"conta
inerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=efe73cb0-0866-4248-aadf-c127455e4404 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:05:31 ha-565925 crio[6561]: time="2024-06-10 11:05:31.159864142Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=69c4c214-955f-464d-ae1a-2da5c77dcfce name=/runtime.v1.RuntimeService/Version
	Jun 10 11:05:31 ha-565925 crio[6561]: time="2024-06-10 11:05:31.159954510Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=69c4c214-955f-464d-ae1a-2da5c77dcfce name=/runtime.v1.RuntimeService/Version
	Jun 10 11:05:31 ha-565925 crio[6561]: time="2024-06-10 11:05:31.160932651Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0369a8d2-5d56-4dfb-9f1f-dbbbf1e5772c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:05:31 ha-565925 crio[6561]: time="2024-06-10 11:05:31.161341054Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718017531161320470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0369a8d2-5d56-4dfb-9f1f-dbbbf1e5772c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:05:31 ha-565925 crio[6561]: time="2024-06-10 11:05:31.161815872Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26a10510-6c5a-4ebe-8196-b9d00b1e5d18 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:05:31 ha-565925 crio[6561]: time="2024-06-10 11:05:31.161874979Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26a10510-6c5a-4ebe-8196-b9d00b1e5d18 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:05:31 ha-565925 crio[6561]: time="2024-06-10 11:05:31.162269691Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc71731db34e54cc482f777258e552da4eb09b06301d22a96d4b5b7a1c09553a,PodSandboxId:2301576baf44ec2b48a39ee83fb5a9bcb8a8f9655e5d368ac4b1373f193c70f1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:6,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718017278826933875,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a7adba8b85d829b73a4b55001ec3a5549587e6b92cba7280bc5042eb1d764a2,PodSandboxId:555188fecd0274a950ee2c75d96e55ba0e8e22f259a08df1f022bdcbea700980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718017260825295406,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307518a974a9d81484017b6def4bcb35734f73f49643e1e3b41a2e1bb4d72619,PodSandboxId:8777e890e5cc662fe143a51eeebf243bac07d02db168f69e8fbe6341b9e5d111,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718017258826262478,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4667bd353fdda8be94426be8fb00d6739c3209268ea60a077feb6d24afc39af7,PodSandboxId:9384a3551e3f6663c95c30015955798fba04704226e06db5bf249fb54feaf99e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718017242826163325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9596f78d1f9f1a08bb0774a454ecd00ac562ae38017ea807582d9fe153c3ae83,PodSandboxId:8777e890e5cc662fe143a51eeebf243bac07d02db168f69e8fbe6341b9e5d111,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718017149836607469,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5196758907fd1be55dfb4db8fdf71169c2226b54a2688835b92147fbaf8b52,PodSandboxId:555188fecd0274a950ee2c75d96e55ba0e8e22f259a08df1f022bdcbea700980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718017149822916335,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14ab46f3546bcfed28150552839b3cc283c32cb309a33ebb0ea67459079f5eb,PodSandboxId:20e1ade57d2542a1c7331c6dcfc2127d5be744e132190337c981b0fc4bed8da4,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718017112116718602,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.
kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9700dc3bf19471a12df22302b585640a8bba48b9c13b6f07e34797964a72bf9,PodSandboxId:9384a3551e3f6663c95c30015955798fba04704226e06db5bf249fb54feaf99e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718017078747702884,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.po
d.terminationGracePeriod: 30,},},&Container{Id:dac5139d75fe4e3d41205aa1803b8091a016d26e34b621f388426b4f28c9788f,PodSandboxId:16504243eb24ec6452badeef3694a359b10b881b6cbee11932acfb706fa05569,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718017079128195775,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b7f7bf516814f2c5dbe0fbc6daa3a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:a6bfc115b83fe8e36c67f3ce6d994b1cce135626a1c3a20165012107bebf06ca,PodSandboxId:868f5b2fa2a9647cf0d9f242ebbb87f7167e73566a4cfd589ec6112e3a3d61c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718017079118362076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b445c2d316f603033fc8e810
ba508bba9398ff7de68e41b686958ee2cb8fcfd,PodSandboxId:b49a011721881d8ce465640daa30b2d69b6cae387aca077c70daa38e2c3cc389,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718017078925256217,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbcde3714e14329d6337427e054bc34da36c1a1a94a6aad9cc9ae1b179eebdd,PodSandboxId:2301576baf44ec2b48a39ee83fb5a9bcb8a8f9655e5d368ac4b1373f193c70f1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718017078902111617,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83c407eac5c82e6b20991f6cfe3e6f662eb2f7cbcc8a79638d675d463c8120dd,PodSandboxId:cea0105c4b4e7225b5371932b06a504c5cbf20c43d948908687c1708dd82410d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718017078803503346,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminatio
nMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d162210bec339f31d4b24d962ad510c8c5712d5173ea2a82ebe50e463194bf12,PodSandboxId:dd0f08cb4bc7915dd3c4046a654abb28b7711f688615e361aaf3b5a874d439d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718017078580667689,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e293a1cc869311fd15c723f109226cd7cf9e58f9c0ce73b81e66e643ba0824,PodSandboxId:276099ec692d58a43f2137fdb8c495cf2b238659587a093f63455929cc0159f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718016607125233498,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:031c3214a18181965175ad1ce4be9461912a8f144a9fd8499e18a516fbc4c24b,PodSandboxId:cfe7af207d454e48b4c9a313d5fffb0f03c0fb7b7fb6a479a1b43dc5e8d3fa0f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1718016585794533885,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b7f7bf516814f2c5dbe0fbc6daa3a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6b392205cc4da
349937ddbd66cd5e4e32466011eb011c9e13a0214e5aeab566,PodSandboxId:92b6f53b325e00531ba020a4091debef83c310509523dcadd98455c576589d1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718016573870537430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a358cc1cc573aa1750cc09e41a48373a9ec054c4093e9b0
4258e36921b56cf5,PodSandboxId:3afe7674416b272a7b1f2f0765e713a115b8a9fc430d4da60440baaec31d798c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718016573906904776,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a51d5bffe5db4200ac6336c7ca76cc182a95d91ff55c5a12955c341dd76f23c5,PodSandboxId:38fe7da9f5e494f306636e4ee0f552c2e44d43db2ef1a04a5ea901f66d5db1e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718016573751979920,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73c4fbf165479cdecba73b84a6325cb243bbb4cd1fed39f1c9a2c00168252e1,PodSandboxId:d3e905f6d61a711b33785d0332754575ce24a61714424b5bce0bd881d36495df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718016573784490891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca1b692a8aa8fde2427299395cc8e93b726bd59d0d7029f6a172775a2ed06780,PodSandboxId:d74bbdd47986be76d0cd64bcc477460ea153199ba5f7b49f49a95d6c410dc7c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718016573866917347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"conta
inerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26a10510-6c5a-4ebe-8196-b9d00b1e5d18 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dc71731db34e5       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f   4 minutes ago       Running             kindnet-cni               6                   2301576baf44e       kindnet-rnn59
	0a7adba8b85d8       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   4 minutes ago       Running             kube-apiserver            6                   555188fecd027       kube-apiserver-ha-565925
	307518a974a9d       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   4 minutes ago       Running             kube-controller-manager   5                   8777e890e5cc6       kube-controller-manager-ha-565925
	4667bd353fdda       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   4 minutes ago       Running             storage-provisioner       6                   9384a3551e3f6       storage-provisioner
	9596f78d1f9f1       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   6 minutes ago       Exited              kube-controller-manager   4                   8777e890e5cc6       kube-controller-manager-ha-565925
	df5196758907f       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   6 minutes ago       Exited              kube-apiserver            5                   555188fecd027       kube-apiserver-ha-565925
	e14ab46f3546b       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   6 minutes ago       Running             busybox                   2                   20e1ade57d254       busybox-fc5497c4f-6wmkd
	dac5139d75fe4       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   7 minutes ago       Running             kube-vip                  1                   16504243eb24e       kube-vip-ha-565925
	a6bfc115b83fe       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   7 minutes ago       Running             kube-proxy                2                   868f5b2fa2a96       kube-proxy-wdjhn
	5b445c2d316f6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   7 minutes ago       Running             coredns                   2                   b49a011721881       coredns-7db6d8ff4d-wn6nh
	3cbcde3714e14       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f   7 minutes ago       Exited              kindnet-cni               5                   2301576baf44e       kindnet-rnn59
	83c407eac5c82       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   7 minutes ago       Running             etcd                      2                   cea0105c4b4e7       etcd-ha-565925
	f9700dc3bf194       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   7 minutes ago       Exited              storage-provisioner       5                   9384a3551e3f6       storage-provisioner
	d162210bec339       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   7 minutes ago       Running             kube-scheduler            2                   dd0f08cb4bc79       kube-scheduler-ha-565925
	51e293a1cc869       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   15 minutes ago      Exited              busybox                   1                   276099ec692d5       busybox-fc5497c4f-6wmkd
	031c3214a1818       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   15 minutes ago      Exited              kube-vip                  0                   cfe7af207d454       kube-vip-ha-565925
	0a358cc1cc573       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Exited              coredns                   1                   3afe7674416b2       coredns-7db6d8ff4d-wn6nh
	d6b392205cc4d       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   15 minutes ago      Exited              kube-proxy                1                   92b6f53b325e0       kube-proxy-wdjhn
	ca1b692a8aa8f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Exited              coredns                   1                   d74bbdd47986b       coredns-7db6d8ff4d-545cf
	d73c4fbf16547       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   15 minutes ago      Exited              kube-scheduler            1                   d3e905f6d61a7       kube-scheduler-ha-565925
	a51d5bffe5db4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   15 minutes ago      Exited              etcd                      1                   38fe7da9f5e49       etcd-ha-565925
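
For reference, this listing can be regenerated on the node itself; a minimal sketch, assuming the ha-565925 profile and the minikube binary path used elsewhere in this run:

  out/minikube-linux-amd64 -p ha-565925 ssh "sudo crictl ps -a"
  out/minikube-linux-amd64 -p ha-565925 ssh "sudo crictl logs 83c407eac5c82"   # e.g. the running etcd container above

crictl ps -a includes exited containers, which is what surfaces the repeated kube-apiserver and kube-controller-manager restarts in the table above.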
	
	
	==> coredns [0a358cc1cc573aa1750cc09e41a48373a9ec054c4093e9b04258e36921b56cf5] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5b445c2d316f603033fc8e810ba508bba9398ff7de68e41b686958ee2cb8fcfd] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.9:57768->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.9:57768->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ca1b692a8aa8fde2427299395cc8e93b726bd59d0d7029f6a172775a2ed06780] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
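
The repeated "dial tcp 10.96.0.1:443" failures above are CoreDNS failing to reach the kubernetes Service ClusterIP (i.e. the apiserver) while it was down or restarting. A quick way to confirm whether that Service currently resolves to a live apiserver, assuming the kubeconfig context is named after the profile:

  kubectl --context ha-565925 get svc kubernetes -n default
  kubectl --context ha-565925 get endpoints kubernetes -n default
  kubectl --context ha-565925 -n kube-system get pods -l component=kube-apiserver -o wide

An empty ENDPOINTS column, or apiserver pods stuck restarting, would match the connection-refused and no-route-to-host errors logged here.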
	
	
	==> describe nodes <==
	Name:               ha-565925
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565925
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-565925
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T10_38_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 10:38:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565925
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 11:05:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 11:04:52 +0000   Mon, 10 Jun 2024 10:38:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 11:04:52 +0000   Mon, 10 Jun 2024 10:38:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 11:04:52 +0000   Mon, 10 Jun 2024 10:38:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 11:04:52 +0000   Mon, 10 Jun 2024 10:38:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.208
	  Hostname:    ha-565925
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 81e39b112b50436db5c7fc16ce8eb53e
	  System UUID:                81e39b11-2b50-436d-b5c7-fc16ce8eb53e
	  Boot ID:                    afd4fe8d-84f7-41ff-9890-dc78b1ff1343
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6wmkd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 coredns-7db6d8ff4d-545cf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 coredns-7db6d8ff4d-wn6nh             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-ha-565925                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-rnn59                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-565925             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-565925    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-wdjhn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-565925             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-565925                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age               From             Message
	  ----     ------                   ----              ----             -------
	  Normal   Starting                 26m               kube-proxy       
	  Normal   Starting                 6m45s             kube-proxy       
	  Normal   Starting                 15m               kube-proxy       
	  Normal   NodeHasSufficientPID     27m               kubelet          Node ha-565925 status is now: NodeHasSufficientPID
	  Normal   Starting                 27m               kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  27m               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  27m               kubelet          Node ha-565925 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    27m               kubelet          Node ha-565925 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           26m               node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Normal   NodeReady                26m               kubelet          Node ha-565925 status is now: NodeReady
	  Normal   RegisteredNode           25m               node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Normal   RegisteredNode           24m               node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Normal   RegisteredNode           15m               node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Normal   RegisteredNode           14m               node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Normal   RegisteredNode           14m               node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Warning  ContainerGCFailed        8m (x5 over 17m)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           5m40s             node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Normal   RegisteredNode           4m16s             node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	
	
	Name:               ha-565925-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565925-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-565925
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T10_39_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 10:39:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565925-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 11:05:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 11:04:51 +0000   Mon, 10 Jun 2024 10:53:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 11:04:51 +0000   Mon, 10 Jun 2024 10:53:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 11:04:51 +0000   Mon, 10 Jun 2024 10:53:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 11:04:51 +0000   Mon, 10 Jun 2024 10:53:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    ha-565925-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 55a76fcaaea54bebb8694a2ff5e7d2ea
	  System UUID:                55a76fca-aea5-4beb-b869-4a2ff5e7d2ea
	  Boot ID:                    f2031124-7282-4f77-956b-81d80d2807d2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8g67g                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 etcd-ha-565925-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         25m
	  kube-system                 kindnet-9jv7q                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	  kube-system                 kube-apiserver-ha-565925-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-controller-manager-ha-565925-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-proxy-vbgnx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-scheduler-ha-565925-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-vip-ha-565925-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5m42s              kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Normal   Starting                 25m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    25m (x8 over 25m)  kubelet          Node ha-565925-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     25m (x7 over 25m)  kubelet          Node ha-565925-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  25m (x8 over 25m)  kubelet          Node ha-565925-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           25m                node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal   RegisteredNode           25m                node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal   RegisteredNode           24m                node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal   NodeNotReady             22m                node-controller  Node ha-565925-m02 status is now: NodeNotReady
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node ha-565925-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node ha-565925-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node ha-565925-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           15m                node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal   NodeNotReady             12m                node-controller  Node ha-565925-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        7m33s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           5m40s              node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal   RegisteredNode           4m16s              node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	
	
	Name:               ha-565925-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565925-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-565925
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T10_41_59_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 10:41:58 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565925-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 10:52:12 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 10 Jun 2024 10:51:52 +0000   Mon, 10 Jun 2024 10:52:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 10 Jun 2024 10:51:52 +0000   Mon, 10 Jun 2024 10:52:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 10 Jun 2024 10:51:52 +0000   Mon, 10 Jun 2024 10:52:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 10 Jun 2024 10:51:52 +0000   Mon, 10 Jun 2024 10:52:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.229
	  Hostname:    ha-565925-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5196e1f9b5684ae78368fe8d66c3d24c
	  System UUID:                5196e1f9-b568-4ae7-8368-fe8d66c3d24c
	  Boot ID:                    fa33354e-1710-42c3-b31e-616fe87f501e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pnv2t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kindnet-lkf5b              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-proxy-dpsbw           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 23m                kube-proxy       
	  Normal   Starting                 13m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    23m (x2 over 23m)  kubelet          Node ha-565925-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     23m (x2 over 23m)  kubelet          Node ha-565925-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  23m (x2 over 23m)  kubelet          Node ha-565925-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           23m                node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   RegisteredNode           23m                node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   RegisteredNode           23m                node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   NodeReady                23m                kubelet          Node ha-565925-m04 status is now: NodeReady
	  Normal   RegisteredNode           15m                node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m (x3 over 13m)  kubelet          Node ha-565925-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x3 over 13m)  kubelet          Node ha-565925-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x3 over 13m)  kubelet          Node ha-565925-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 13m (x2 over 13m)  kubelet          Node ha-565925-m04 has been rebooted, boot id: fa33354e-1710-42c3-b31e-616fe87f501e
	  Normal   NodeReady                13m (x2 over 13m)  kubelet          Node ha-565925-m04 status is now: NodeReady
	  Normal   NodeNotReady             12m (x2 over 14m)  node-controller  Node ha-565925-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           5m40s              node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   RegisteredNode           4m16s              node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
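
Node ha-565925-m04 above is still carrying node.kubernetes.io/unreachable taints and Unknown conditions because its kubelet stopped posting status at 10:52. A minimal way to re-check just the Ready condition, assuming the same context name:

  kubectl --context ha-565925 get nodes
  kubectl --context ha-565925 get node ha-565925-m04 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'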
	
	
	==> dmesg <==
	[  +7.135890] systemd-fstab-generator[1360]: Ignoring "noauto" option for root device
	[  +0.082129] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.392312] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.014769] kauditd_printk_skb: 43 callbacks suppressed
	[  +9.917879] kauditd_printk_skb: 21 callbacks suppressed
	[Jun10 10:49] systemd-fstab-generator[3825]: Ignoring "noauto" option for root device
	[  +0.169090] systemd-fstab-generator[3837]: Ignoring "noauto" option for root device
	[  +0.188008] systemd-fstab-generator[3851]: Ignoring "noauto" option for root device
	[  +0.156438] systemd-fstab-generator[3863]: Ignoring "noauto" option for root device
	[  +0.268788] systemd-fstab-generator[3891]: Ignoring "noauto" option for root device
	[  +0.739516] systemd-fstab-generator[3989]: Ignoring "noauto" option for root device
	[ +12.921754] kauditd_printk_skb: 218 callbacks suppressed
	[ +10.073147] kauditd_printk_skb: 1 callbacks suppressed
	[Jun10 10:50] kauditd_printk_skb: 1 callbacks suppressed
	[ +10.065204] kauditd_printk_skb: 6 callbacks suppressed
	[Jun10 10:56] systemd-fstab-generator[6466]: Ignoring "noauto" option for root device
	[  +0.159614] systemd-fstab-generator[6478]: Ignoring "noauto" option for root device
	[  +0.189354] systemd-fstab-generator[6492]: Ignoring "noauto" option for root device
	[  +0.153693] systemd-fstab-generator[6504]: Ignoring "noauto" option for root device
	[  +0.292364] systemd-fstab-generator[6532]: Ignoring "noauto" option for root device
	[Jun10 10:57] systemd-fstab-generator[6677]: Ignoring "noauto" option for root device
	[  +0.096364] kauditd_printk_skb: 106 callbacks suppressed
	[  +5.143063] kauditd_printk_skb: 12 callbacks suppressed
	[Jun10 10:58] kauditd_printk_skb: 90 callbacks suppressed
	[ +27.067539] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [83c407eac5c82e6b20991f6cfe3e6f662eb2f7cbcc8a79638d675d463c8120dd] <==
	{"level":"info","ts":"2024-06-10T10:59:36.741675Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 has received 2 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-06-10T10:59:36.741711Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 became candidate at term 5"}
	{"level":"info","ts":"2024-06-10T10:59:36.741717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 received MsgVoteResp from 7fe6bf77aaafe0f6 at term 5"}
	{"level":"info","ts":"2024-06-10T10:59:36.741731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 [logterm: 4, index: 3625] sent MsgVote request to 71310573b672730c at term 5"}
	{"level":"info","ts":"2024-06-10T10:59:36.785366Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"7fe6bf77aaafe0f6","to":"71310573b672730c","stream-type":"stream Message"}
	{"level":"info","ts":"2024-06-10T10:59:36.785548Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:59:36.785414Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"7fe6bf77aaafe0f6","to":"71310573b672730c","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-06-10T10:59:36.785834Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:59:36.80551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 received MsgVoteResp from 71310573b672730c at term 5"}
	{"level":"info","ts":"2024-06-10T10:59:36.805564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 has received 2 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-06-10T10:59:36.80559Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 became leader at term 5"}
	{"level":"info","ts":"2024-06-10T10:59:36.805602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7fe6bf77aaafe0f6 elected leader 7fe6bf77aaafe0f6 at term 5"}
	{"level":"info","ts":"2024-06-10T10:59:36.830255Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7fe6bf77aaafe0f6","local-member-attributes":"{Name:ha-565925 ClientURLs:[https://192.168.39.208:2379]}","request-path":"/0/members/7fe6bf77aaafe0f6/attributes","cluster-id":"fb8a78b66dce1ac7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-10T10:59:36.830295Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T10:59:36.830656Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-10T10:59:36.830702Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-10T10:59:36.830351Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T10:59:36.835831Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.208:2379"}
	{"level":"info","ts":"2024-06-10T10:59:36.83613Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-06-10T10:59:36.846934Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:41442","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:41442: write: broken pipe"}
	{"level":"warn","ts":"2024-06-10T10:59:36.84925Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:41434","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:41434: write: broken pipe"}
	{"level":"warn","ts":"2024-06-10T10:59:36.850186Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:41456","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:41456: write: broken pipe"}
	{"level":"warn","ts":"2024-06-10T10:59:36.853102Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:52776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-06-10T10:59:36.855651Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:52778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-06-10T10:59:36.860828Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:52782","server-name":"","error":"EOF"}
	
	
	==> etcd [a51d5bffe5db4200ac6336c7ca76cc182a95d91ff55c5a12955c341dd76f23c5] <==
	{"level":"info","ts":"2024-06-10T10:54:42.177629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 [term 3] starts to transfer leadership to 71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:54:42.177669Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 sends MsgTimeoutNow to 71310573b672730c immediately as 71310573b672730c already has up-to-date log"}
	{"level":"info","ts":"2024-06-10T10:54:42.180133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 [term: 3] received a MsgVote message with higher term from 71310573b672730c [term: 4]"}
	{"level":"info","ts":"2024-06-10T10:54:42.180187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 became follower at term 4"}
	{"level":"info","ts":"2024-06-10T10:54:42.180202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 [logterm: 3, index: 3624, vote: 0] cast MsgVote for 71310573b672730c [logterm: 3, index: 3624] at term 4"}
	{"level":"info","ts":"2024-06-10T10:54:42.180211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7fe6bf77aaafe0f6 lost leader 7fe6bf77aaafe0f6 at term 4"}
	{"level":"info","ts":"2024-06-10T10:54:42.181914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7fe6bf77aaafe0f6 elected leader 71310573b672730c at term 4"}
	{"level":"info","ts":"2024-06-10T10:54:42.278708Z","caller":"etcdserver/server.go:1448","msg":"leadership transfer finished","local-member-id":"7fe6bf77aaafe0f6","old-leader-member-id":"7fe6bf77aaafe0f6","new-leader-member-id":"71310573b672730c","took":"101.126124ms"}
	{"level":"info","ts":"2024-06-10T10:54:42.278948Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"71310573b672730c"}
	{"level":"warn","ts":"2024-06-10T10:54:42.279946Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:54:42.280007Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"71310573b672730c"}
	{"level":"warn","ts":"2024-06-10T10:54:42.281365Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:54:42.281473Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:54:42.281547Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c"}
	{"level":"warn","ts":"2024-06-10T10:54:42.281726Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","error":"context canceled"}
	{"level":"warn","ts":"2024-06-10T10:54:42.281815Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"71310573b672730c","error":"failed to read 71310573b672730c on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-06-10T10:54:42.281874Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c"}
	{"level":"warn","ts":"2024-06-10T10:54:42.281999Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","error":"context canceled"}
	{"level":"info","ts":"2024-06-10T10:54:42.282037Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:54:42.282068Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:54:42.28884Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.208:2380"}
	{"level":"warn","ts":"2024-06-10T10:54:42.289207Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.230:49300","server-name":"","error":"read tcp 192.168.39.208:2380->192.168.39.230:49300: use of closed network connection"}
	{"level":"warn","ts":"2024-06-10T10:54:42.289267Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.230:49290","server-name":"","error":"read tcp 192.168.39.208:2380->192.168.39.230:49290: use of closed network connection"}
	{"level":"info","ts":"2024-06-10T10:54:43.289535Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.208:2380"}
	{"level":"info","ts":"2024-06-10T10:54:43.289584Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-565925","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.208:2380"],"advertise-client-urls":["https://192.168.39.208:2379"]}
	
	
	==> kernel <==
	 11:05:31 up 27 min,  0 users,  load average: 0.25, 0.23, 0.25
	Linux ha-565925 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3cbcde3714e14329d6337427e054bc34da36c1a1a94a6aad9cc9ae1b179eebdd] <==
	I0610 10:57:59.457606       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0610 10:58:09.681177       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0610 10:58:19.691071       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0610 10:58:20.691876       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0610 10:58:22.693297       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0610 10:58:25.694650       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xe3b
	
	
	==> kindnet [dc71731db34e54cc482f777258e552da4eb09b06301d22a96d4b5b7a1c09553a] <==
	I0610 11:04:49.989710       1 main.go:250] Node ha-565925-m04 has CIDR [10.244.3.0/24] 
	I0610 11:05:00.004115       1 main.go:223] Handling node with IPs: map[192.168.39.208:{}]
	I0610 11:05:00.004146       1 main.go:227] handling current node
	I0610 11:05:00.004157       1 main.go:223] Handling node with IPs: map[192.168.39.230:{}]
	I0610 11:05:00.004162       1 main.go:250] Node ha-565925-m02 has CIDR [10.244.1.0/24] 
	I0610 11:05:00.004299       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0610 11:05:00.004317       1 main.go:250] Node ha-565925-m04 has CIDR [10.244.3.0/24] 
	I0610 11:05:10.019517       1 main.go:223] Handling node with IPs: map[192.168.39.208:{}]
	I0610 11:05:10.019554       1 main.go:227] handling current node
	I0610 11:05:10.019565       1 main.go:223] Handling node with IPs: map[192.168.39.230:{}]
	I0610 11:05:10.019570       1 main.go:250] Node ha-565925-m02 has CIDR [10.244.1.0/24] 
	I0610 11:05:10.019672       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0610 11:05:10.019693       1 main.go:250] Node ha-565925-m04 has CIDR [10.244.3.0/24] 
	I0610 11:05:20.028027       1 main.go:223] Handling node with IPs: map[192.168.39.208:{}]
	I0610 11:05:20.028061       1 main.go:227] handling current node
	I0610 11:05:20.028072       1 main.go:223] Handling node with IPs: map[192.168.39.230:{}]
	I0610 11:05:20.028077       1 main.go:250] Node ha-565925-m02 has CIDR [10.244.1.0/24] 
	I0610 11:05:20.028239       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0610 11:05:20.028259       1 main.go:250] Node ha-565925-m04 has CIDR [10.244.3.0/24] 
	I0610 11:05:30.035089       1 main.go:223] Handling node with IPs: map[192.168.39.208:{}]
	I0610 11:05:30.035244       1 main.go:227] handling current node
	I0610 11:05:30.035282       1 main.go:223] Handling node with IPs: map[192.168.39.230:{}]
	I0610 11:05:30.035328       1 main.go:250] Node ha-565925-m02 has CIDR [10.244.1.0/24] 
	I0610 11:05:30.035540       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0610 11:05:30.035588       1 main.go:250] Node ha-565925-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [0a7adba8b85d829b73a4b55001ec3a5549587e6b92cba7280bc5042eb1d764a2] <==
	I0610 11:01:02.712974       1 naming_controller.go:291] Starting NamingConditionController
	I0610 11:01:02.713016       1 establishing_controller.go:76] Starting EstablishingController
	I0610 11:01:02.713063       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0610 11:01:02.713104       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0610 11:01:02.713135       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0610 11:01:02.805178       1 shared_informer.go:320] Caches are synced for configmaps
	I0610 11:01:02.807179       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0610 11:01:02.807263       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0610 11:01:02.807444       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 11:01:02.817685       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0610 11:01:02.819533       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0610 11:01:02.819576       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 11:01:02.828948       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0610 11:01:02.829649       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0610 11:01:02.829705       1 policy_source.go:224] refreshing policies
	I0610 11:01:02.831245       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0610 11:01:02.831283       1 aggregator.go:165] initial CRD sync complete...
	I0610 11:01:02.831301       1 autoregister_controller.go:141] Starting autoregister controller
	I0610 11:01:02.831314       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0610 11:01:02.831331       1 cache.go:39] Caches are synced for autoregister controller
	I0610 11:01:02.877900       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 11:01:03.715623       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0610 11:01:04.045531       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.208 192.168.39.230]
	I0610 11:01:04.047009       1 controller.go:615] quota admission added evaluator for: endpoints
	I0610 11:01:04.053899       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [df5196758907fd1be55dfb4db8fdf71169c2226b54a2688835b92147fbaf8b52] <==
	I0610 10:59:10.014270       1 options.go:221] external host was not specified, using 192.168.39.208
	I0610 10:59:10.015144       1 server.go:148] Version: v1.30.1
	I0610 10:59:10.015205       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 10:59:10.307284       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0610 10:59:10.317527       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0610 10:59:10.319834       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0610 10:59:10.320113       1 instance.go:299] Using reconciler: lease
	I0610 10:59:10.319817       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0610 10:59:30.307560       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0610 10:59:30.307727       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0610 10:59:30.329812       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0610 10:59:30.329828       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [307518a974a9d81484017b6def4bcb35734f73f49643e1e3b41a2e1bb4d72619] <==
	I0610 11:01:15.455918       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0610 11:01:15.455941       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0610 11:01:15.459291       1 shared_informer.go:320] Caches are synced for persistent volume
	I0610 11:01:15.461181       1 shared_informer.go:320] Caches are synced for HPA
	I0610 11:01:15.463528       1 shared_informer.go:320] Caches are synced for attach detach
	I0610 11:01:15.469908       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0610 11:01:15.481854       1 shared_informer.go:320] Caches are synced for taint
	I0610 11:01:15.481984       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0610 11:01:15.502497       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-565925"
	I0610 11:01:15.502596       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-565925-m02"
	I0610 11:01:15.502626       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-565925-m04"
	I0610 11:01:15.502655       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0610 11:01:15.533173       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 11:01:15.572282       1 shared_informer.go:320] Caches are synced for daemon sets
	I0610 11:01:15.579672       1 shared_informer.go:320] Caches are synced for stateful set
	I0610 11:01:15.605800       1 shared_informer.go:320] Caches are synced for resource quota
	I0610 11:01:15.620895       1 shared_informer.go:320] Caches are synced for service account
	I0610 11:01:15.640788       1 shared_informer.go:320] Caches are synced for namespace
	I0610 11:01:16.062645       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 11:01:16.103079       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 11:01:16.103172       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0610 11:01:19.948357       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-b5wq2 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-b5wq2\": the object has been modified; please apply your changes to the latest version and try again"
	I0610 11:01:19.949124       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"42df7ab3-0fab-48a9-8edf-d2a6cd96dc74", APIVersion:"v1", ResourceVersion:"251", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-b5wq2 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-b5wq2": the object has been modified; please apply your changes to the latest version and try again
	I0610 11:01:19.969018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.375438ms"
	I0610 11:01:19.969202       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="88.299µs"
	
	
	==> kube-controller-manager [9596f78d1f9f1a08bb0774a454ecd00ac562ae38017ea807582d9fe153c3ae83] <==
	I0610 10:59:10.432358       1 serving.go:380] Generated self-signed cert in-memory
	I0610 10:59:10.684166       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0610 10:59:10.684196       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 10:59:10.686805       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 10:59:10.686985       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 10:59:10.687020       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 10:59:10.687000       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0610 10:59:31.334956       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.208:8443/healthz\": dial tcp 192.168.39.208:8443: connect: connection refused"
	
	
	==> kube-proxy [a6bfc115b83fe8e36c67f3ce6d994b1cce135626a1c3a20165012107bebf06ca] <==
	W0610 10:59:01.722082       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:59:01.722242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:59:01.722442       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0610 10:59:01.722653       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:59:01.722727       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:59:04.793778       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:59:04.794073       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:59:10.937344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:59:10.937460       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:59:14.010071       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:59:14.010130       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:59:14.010208       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0610 10:59:14.010492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:59:14.010603       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:59:26.297384       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:59:26.297528       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:59:26.297682       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0610 10:59:35.515090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:59:35.515236       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:59:38.585911       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0610 10:59:38.586150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:59:38.586728       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0610 11:00:04.598963       1 shared_informer.go:320] Caches are synced for service config
	I0610 11:00:20.298775       1 shared_informer.go:320] Caches are synced for node config
	I0610 11:00:25.198886       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d6b392205cc4da349937ddbd66cd5e4e32466011eb011c9e13a0214e5aeab566] <==
	I0610 10:50:16.480570       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 10:50:16.480704       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 10:50:16.480733       1 server_linux.go:165] "Using iptables Proxier"
	I0610 10:50:16.483458       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 10:50:16.483693       1 server.go:872] "Version info" version="v1.30.1"
	I0610 10:50:16.483731       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 10:50:16.485415       1 config.go:192] "Starting service config controller"
	I0610 10:50:16.485458       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 10:50:16.485503       1 config.go:101] "Starting endpoint slice config controller"
	I0610 10:50:16.485519       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 10:50:16.486337       1 config.go:319] "Starting node config controller"
	I0610 10:50:16.486367       1 shared_informer.go:313] Waiting for caches to sync for node config
	W0610 10:50:19.481660       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:50:19.481945       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:50:19.483161       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0610 10:50:19.483323       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:50:19.483424       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-565925&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:50:19.483590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:50:19.483667       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0610 10:50:20.586480       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 10:50:20.885886       1 shared_informer.go:320] Caches are synced for service config
	I0610 10:50:20.886651       1 shared_informer.go:320] Caches are synced for node config
	W0610 10:53:04.252585       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0610 10:53:04.252975       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0610 10:53:04.252979       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-scheduler [d162210bec339f31d4b24d962ad510c8c5712d5173ea2a82ebe50e463194bf12] <==
	W0610 11:00:27.009004       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.208:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 11:00:27.009047       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.208:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 11:00:27.059796       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.208:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 11:00:27.059839       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.208:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 11:00:29.112975       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.208:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 11:00:29.113094       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.208:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 11:00:31.301433       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.208:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 11:00:31.301478       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.208:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 11:00:34.520628       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.208:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 11:00:34.520810       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.208:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 11:00:37.060630       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 11:00:37.060669       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 11:00:37.112601       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.208:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 11:00:37.112804       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.208:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 11:00:45.256863       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.208:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 11:00:45.257037       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.208:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 11:00:45.916588       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.208:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 11:00:45.916650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.208:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 11:00:49.584561       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 11:00:49.584636       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 11:00:51.537079       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 11:00:51.537193       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 11:00:57.757909       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.208:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 11:00:57.757987       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.208:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	I0610 11:01:05.365147       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d73c4fbf165479cdecba73b84a6325cb243bbb4cd1fed39f1c9a2c00168252e1] <==
	E0610 10:50:13.171445       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:13.349389       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:13.349453       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:14.073188       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.208:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:14.073242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.208:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:14.293199       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:14.293274       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:14.389307       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.208:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:14.389425       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.208:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:14.514209       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.208:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:14.514616       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.208:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:15.509656       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.208:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:15.509725       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.208:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:17.832639       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 10:50:17.832863       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 10:50:17.833061       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 10:50:17.833139       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 10:50:17.833237       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 10:50:17.833265       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 10:50:30.277918       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0610 10:52:02.506730       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pnv2t\": pod busybox-fc5497c4f-pnv2t is already assigned to node \"ha-565925-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-pnv2t" node="ha-565925-m04"
	E0610 10:52:02.508644       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod fc130e49-4bd9-4d39-86e2-5c9633be05c5(default/busybox-fc5497c4f-pnv2t) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-pnv2t"
	E0610 10:52:02.508944       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pnv2t\": pod busybox-fc5497c4f-pnv2t is already assigned to node \"ha-565925-m04\"" pod="default/busybox-fc5497c4f-pnv2t"
	I0610 10:52:02.510673       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-pnv2t" node="ha-565925-m04"
	E0610 10:54:42.082619       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jun 10 11:04:31 ha-565925 kubelet[1367]: E0610 11:04:31.819331    1367 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists"
	Jun 10 11:04:31 ha-565925 kubelet[1367]: E0610 11:04:31.819410    1367 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists" pod="kube-system/coredns-7db6d8ff4d-545cf"
	Jun 10 11:04:31 ha-565925 kubelet[1367]: E0610 11:04:31.819436    1367 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists" pod="kube-system/coredns-7db6d8ff4d-545cf"
	Jun 10 11:04:31 ha-565925 kubelet[1367]: E0610 11:04:31.819499    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-545cf_kube-system(7564efde-b96c-48b3-b194-bca695f7ae95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-545cf_kube-system(7564efde-b96c-48b3-b194-bca695f7ae95)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\\\" already exists\"" pod="kube-system/coredns-7db6d8ff4d-545cf" podUID="7564efde-b96c-48b3-b194-bca695f7ae95"
	Jun 10 11:04:46 ha-565925 kubelet[1367]: E0610 11:04:46.821691    1367 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists"
	Jun 10 11:04:46 ha-565925 kubelet[1367]: E0610 11:04:46.822302    1367 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists" pod="kube-system/coredns-7db6d8ff4d-545cf"
	Jun 10 11:04:46 ha-565925 kubelet[1367]: E0610 11:04:46.822342    1367 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists" pod="kube-system/coredns-7db6d8ff4d-545cf"
	Jun 10 11:04:46 ha-565925 kubelet[1367]: E0610 11:04:46.822428    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-545cf_kube-system(7564efde-b96c-48b3-b194-bca695f7ae95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-545cf_kube-system(7564efde-b96c-48b3-b194-bca695f7ae95)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\\\" already exists\"" pod="kube-system/coredns-7db6d8ff4d-545cf" podUID="7564efde-b96c-48b3-b194-bca695f7ae95"
	Jun 10 11:05:01 ha-565925 kubelet[1367]: E0610 11:05:01.818023    1367 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists"
	Jun 10 11:05:01 ha-565925 kubelet[1367]: E0610 11:05:01.818120    1367 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists" pod="kube-system/coredns-7db6d8ff4d-545cf"
	Jun 10 11:05:01 ha-565925 kubelet[1367]: E0610 11:05:01.818145    1367 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists" pod="kube-system/coredns-7db6d8ff4d-545cf"
	Jun 10 11:05:01 ha-565925 kubelet[1367]: E0610 11:05:01.818194    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-545cf_kube-system(7564efde-b96c-48b3-b194-bca695f7ae95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-545cf_kube-system(7564efde-b96c-48b3-b194-bca695f7ae95)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\\\" already exists\"" pod="kube-system/coredns-7db6d8ff4d-545cf" podUID="7564efde-b96c-48b3-b194-bca695f7ae95"
	Jun 10 11:05:15 ha-565925 kubelet[1367]: E0610 11:05:15.817099    1367 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists"
	Jun 10 11:05:15 ha-565925 kubelet[1367]: E0610 11:05:15.817437    1367 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists" pod="kube-system/coredns-7db6d8ff4d-545cf"
	Jun 10 11:05:15 ha-565925 kubelet[1367]: E0610 11:05:15.817492    1367 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists" pod="kube-system/coredns-7db6d8ff4d-545cf"
	Jun 10 11:05:15 ha-565925 kubelet[1367]: E0610 11:05:15.817572    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-545cf_kube-system(7564efde-b96c-48b3-b194-bca695f7ae95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-545cf_kube-system(7564efde-b96c-48b3-b194-bca695f7ae95)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\\\" already exists\"" pod="kube-system/coredns-7db6d8ff4d-545cf" podUID="7564efde-b96c-48b3-b194-bca695f7ae95"
	Jun 10 11:05:26 ha-565925 kubelet[1367]: E0610 11:05:26.819132    1367 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists"
	Jun 10 11:05:26 ha-565925 kubelet[1367]: E0610 11:05:26.819458    1367 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists" pod="kube-system/coredns-7db6d8ff4d-545cf"
	Jun 10 11:05:26 ha-565925 kubelet[1367]: E0610 11:05:26.819579    1367 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists" pod="kube-system/coredns-7db6d8ff4d-545cf"
	Jun 10 11:05:26 ha-565925 kubelet[1367]: E0610 11:05:26.819650    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-545cf_kube-system(7564efde-b96c-48b3-b194-bca695f7ae95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-545cf_kube-system(7564efde-b96c-48b3-b194-bca695f7ae95)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\\\" already exists\"" pod="kube-system/coredns-7db6d8ff4d-545cf" podUID="7564efde-b96c-48b3-b194-bca695f7ae95"
	Jun 10 11:05:30 ha-565925 kubelet[1367]: E0610 11:05:30.827540    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 11:05:30 ha-565925 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 11:05:30 ha-565925 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 11:05:30 ha-565925 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 11:05:30 ha-565925 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	E0610 11:05:30.726087   32963 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19046-3880/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-565925 -n ha-565925
helpers_test.go:261: (dbg) Run:  kubectl --context ha-565925 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (651.55s)

TestMultiControlPlane/serial/AddSecondaryNode (140.74s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-565925 --control-plane -v=7 --alsologtostderr
E0610 11:06:57.914211   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
E0610 11:07:15.499383   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
ha_test.go:605: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p ha-565925 --control-plane -v=7 --alsologtostderr: signal: killed (2m18.195651609s)

-- stdout --
	* Adding node m05 to cluster ha-565925 as [worker control-plane]
	* Starting "ha-565925-m05" control-plane node in "ha-565925" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	* Verifying Kubernetes components...

-- /stdout --
** stderr ** 
	I0610 11:05:33.055553   33088 out.go:291] Setting OutFile to fd 1 ...
	I0610 11:05:33.056023   33088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:05:33.056035   33088 out.go:304] Setting ErrFile to fd 2...
	I0610 11:05:33.056040   33088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:05:33.056218   33088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 11:05:33.056477   33088 mustload.go:65] Loading cluster: ha-565925
	I0610 11:05:33.056835   33088 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:05:33.057245   33088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:05:33.057284   33088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:05:33.071648   33088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35169
	I0610 11:05:33.072065   33088 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:05:33.072584   33088 main.go:141] libmachine: Using API Version  1
	I0610 11:05:33.072609   33088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:05:33.072986   33088 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:05:33.073210   33088 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 11:05:33.074806   33088 host.go:66] Checking if "ha-565925" exists ...
	I0610 11:05:33.075162   33088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:05:33.075203   33088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:05:33.089550   33088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38593
	I0610 11:05:33.089975   33088 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:05:33.090508   33088 main.go:141] libmachine: Using API Version  1
	I0610 11:05:33.090540   33088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:05:33.090863   33088 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:05:33.091048   33088 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 11:05:33.091532   33088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:05:33.091568   33088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:05:33.105495   33088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39519
	I0610 11:05:33.105869   33088 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:05:33.106260   33088 main.go:141] libmachine: Using API Version  1
	I0610 11:05:33.106278   33088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:05:33.106539   33088 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:05:33.106700   33088 main.go:141] libmachine: (ha-565925-m02) Calling .GetState
	I0610 11:05:33.108247   33088 host.go:66] Checking if "ha-565925-m02" exists ...
	I0610 11:05:33.108591   33088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:05:33.108641   33088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:05:33.123862   33088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32969
	I0610 11:05:33.124210   33088 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:05:33.124691   33088 main.go:141] libmachine: Using API Version  1
	I0610 11:05:33.124714   33088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:05:33.125096   33088 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:05:33.125316   33088 main.go:141] libmachine: (ha-565925-m02) Calling .DriverName
	I0610 11:05:33.125494   33088 api_server.go:166] Checking apiserver status ...
	I0610 11:05:33.125557   33088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:05:33.125604   33088 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 11:05:33.128470   33088 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 11:05:33.129013   33088 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 11:05:33.129044   33088 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 11:05:33.129240   33088 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 11:05:33.129423   33088 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 11:05:33.129563   33088 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 11:05:33.129684   33088 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 11:05:33.231423   33088 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/8119/cgroup
	W0610 11:05:33.240913   33088 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/8119/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 11:05:33.240988   33088 ssh_runner.go:195] Run: ls
	I0610 11:05:33.245240   33088 api_server.go:253] Checking apiserver healthz at https://192.168.39.208:8443/healthz ...
	I0610 11:05:33.249295   33088 api_server.go:279] https://192.168.39.208:8443/healthz returned 200:
	ok
	I0610 11:05:33.251539   33088 out.go:177] * Adding node m05 to cluster ha-565925 as [worker control-plane]
	I0610 11:05:33.253090   33088 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:05:33.253202   33088 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json ...
	I0610 11:05:33.254948   33088 out.go:177] * Starting "ha-565925-m05" control-plane node in "ha-565925" cluster
	I0610 11:05:33.256171   33088 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 11:05:33.256208   33088 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0610 11:05:33.256220   33088 cache.go:56] Caching tarball of preloaded images
	I0610 11:05:33.256332   33088 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 11:05:33.256347   33088 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0610 11:05:33.256432   33088 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json ...
	I0610 11:05:33.256591   33088 start.go:360] acquireMachinesLock for ha-565925-m05: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 11:05:33.256648   33088 start.go:364] duration metric: took 37.584µs to acquireMachinesLock for "ha-565925-m05"
	I0610 11:05:33.256665   33088 start.go:93] Provisioning new machine with config: &{Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.229 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m05 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m05 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:true Worker:true}
	I0610 11:05:33.256822   33088 start.go:125] createHost starting for "m05" (driver="kvm2")
	I0610 11:05:33.258453   33088 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 11:05:33.258578   33088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:05:33.258607   33088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:05:33.274078   33088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46303
	I0610 11:05:33.274539   33088 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:05:33.275033   33088 main.go:141] libmachine: Using API Version  1
	I0610 11:05:33.275059   33088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:05:33.275342   33088 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:05:33.275501   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetMachineName
	I0610 11:05:33.275636   33088 main.go:141] libmachine: (ha-565925-m05) Calling .DriverName
	I0610 11:05:33.275750   33088 start.go:159] libmachine.API.Create for "ha-565925" (driver="kvm2")
	I0610 11:05:33.275779   33088 client.go:168] LocalClient.Create starting
	I0610 11:05:33.275808   33088 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem
	I0610 11:05:33.275845   33088 main.go:141] libmachine: Decoding PEM data...
	I0610 11:05:33.275858   33088 main.go:141] libmachine: Parsing certificate...
	I0610 11:05:33.275912   33088 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem
	I0610 11:05:33.275932   33088 main.go:141] libmachine: Decoding PEM data...
	I0610 11:05:33.275945   33088 main.go:141] libmachine: Parsing certificate...
	I0610 11:05:33.275962   33088 main.go:141] libmachine: Running pre-create checks...
	I0610 11:05:33.275970   33088 main.go:141] libmachine: (ha-565925-m05) Calling .PreCreateCheck
	I0610 11:05:33.276153   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetConfigRaw
	I0610 11:05:33.276486   33088 main.go:141] libmachine: Creating machine...
	I0610 11:05:33.276499   33088 main.go:141] libmachine: (ha-565925-m05) Calling .Create
	I0610 11:05:33.276621   33088 main.go:141] libmachine: (ha-565925-m05) Creating KVM machine...
	I0610 11:05:33.277894   33088 main.go:141] libmachine: (ha-565925-m05) DBG | found existing default KVM network
	I0610 11:05:33.278003   33088 main.go:141] libmachine: (ha-565925-m05) DBG | found existing private KVM network mk-ha-565925
	I0610 11:05:33.278115   33088 main.go:141] libmachine: (ha-565925-m05) Setting up store path in /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m05 ...
	I0610 11:05:33.278139   33088 main.go:141] libmachine: (ha-565925-m05) Building disk image from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0610 11:05:33.278205   33088 main.go:141] libmachine: (ha-565925-m05) DBG | I0610 11:05:33.278110   33124 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 11:05:33.278333   33088 main.go:141] libmachine: (ha-565925-m05) Downloading /home/jenkins/minikube-integration/19046-3880/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 11:05:33.506550   33088 main.go:141] libmachine: (ha-565925-m05) DBG | I0610 11:05:33.506421   33124 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m05/id_rsa...
	I0610 11:05:33.656758   33088 main.go:141] libmachine: (ha-565925-m05) DBG | I0610 11:05:33.656647   33124 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m05/ha-565925-m05.rawdisk...
	I0610 11:05:33.656779   33088 main.go:141] libmachine: (ha-565925-m05) DBG | Writing magic tar header
	I0610 11:05:33.656789   33088 main.go:141] libmachine: (ha-565925-m05) DBG | Writing SSH key tar header
	I0610 11:05:33.656844   33088 main.go:141] libmachine: (ha-565925-m05) DBG | I0610 11:05:33.656788   33124 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m05 ...
	I0610 11:05:33.656913   33088 main.go:141] libmachine: (ha-565925-m05) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m05
	I0610 11:05:33.656942   33088 main.go:141] libmachine: (ha-565925-m05) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines
	I0610 11:05:33.656967   33088 main.go:141] libmachine: (ha-565925-m05) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 11:05:33.656982   33088 main.go:141] libmachine: (ha-565925-m05) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m05 (perms=drwx------)
	I0610 11:05:33.656996   33088 main.go:141] libmachine: (ha-565925-m05) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines (perms=drwxr-xr-x)
	I0610 11:05:33.657002   33088 main.go:141] libmachine: (ha-565925-m05) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube (perms=drwxr-xr-x)
	I0610 11:05:33.657008   33088 main.go:141] libmachine: (ha-565925-m05) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880
	I0610 11:05:33.657021   33088 main.go:141] libmachine: (ha-565925-m05) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880 (perms=drwxrwxr-x)
	I0610 11:05:33.657036   33088 main.go:141] libmachine: (ha-565925-m05) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0610 11:05:33.657049   33088 main.go:141] libmachine: (ha-565925-m05) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0610 11:05:33.657062   33088 main.go:141] libmachine: (ha-565925-m05) DBG | Checking permissions on dir: /home/jenkins
	I0610 11:05:33.657072   33088 main.go:141] libmachine: (ha-565925-m05) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0610 11:05:33.657078   33088 main.go:141] libmachine: (ha-565925-m05) DBG | Checking permissions on dir: /home
	I0610 11:05:33.657101   33088 main.go:141] libmachine: (ha-565925-m05) DBG | Skipping /home - not owner
	I0610 11:05:33.657121   33088 main.go:141] libmachine: (ha-565925-m05) Creating domain...
	I0610 11:05:33.658130   33088 main.go:141] libmachine: (ha-565925-m05) define libvirt domain using xml: 
	I0610 11:05:33.658152   33088 main.go:141] libmachine: (ha-565925-m05) <domain type='kvm'>
	I0610 11:05:33.658163   33088 main.go:141] libmachine: (ha-565925-m05)   <name>ha-565925-m05</name>
	I0610 11:05:33.658171   33088 main.go:141] libmachine: (ha-565925-m05)   <memory unit='MiB'>2200</memory>
	I0610 11:05:33.658191   33088 main.go:141] libmachine: (ha-565925-m05)   <vcpu>2</vcpu>
	I0610 11:05:33.658208   33088 main.go:141] libmachine: (ha-565925-m05)   <features>
	I0610 11:05:33.658216   33088 main.go:141] libmachine: (ha-565925-m05)     <acpi/>
	I0610 11:05:33.658227   33088 main.go:141] libmachine: (ha-565925-m05)     <apic/>
	I0610 11:05:33.658239   33088 main.go:141] libmachine: (ha-565925-m05)     <pae/>
	I0610 11:05:33.658247   33088 main.go:141] libmachine: (ha-565925-m05)     
	I0610 11:05:33.658253   33088 main.go:141] libmachine: (ha-565925-m05)   </features>
	I0610 11:05:33.658260   33088 main.go:141] libmachine: (ha-565925-m05)   <cpu mode='host-passthrough'>
	I0610 11:05:33.658265   33088 main.go:141] libmachine: (ha-565925-m05)   
	I0610 11:05:33.658271   33088 main.go:141] libmachine: (ha-565925-m05)   </cpu>
	I0610 11:05:33.658277   33088 main.go:141] libmachine: (ha-565925-m05)   <os>
	I0610 11:05:33.658281   33088 main.go:141] libmachine: (ha-565925-m05)     <type>hvm</type>
	I0610 11:05:33.658287   33088 main.go:141] libmachine: (ha-565925-m05)     <boot dev='cdrom'/>
	I0610 11:05:33.658294   33088 main.go:141] libmachine: (ha-565925-m05)     <boot dev='hd'/>
	I0610 11:05:33.658300   33088 main.go:141] libmachine: (ha-565925-m05)     <bootmenu enable='no'/>
	I0610 11:05:33.658305   33088 main.go:141] libmachine: (ha-565925-m05)   </os>
	I0610 11:05:33.658371   33088 main.go:141] libmachine: (ha-565925-m05)   <devices>
	I0610 11:05:33.658412   33088 main.go:141] libmachine: (ha-565925-m05)     <disk type='file' device='cdrom'>
	I0610 11:05:33.658438   33088 main.go:141] libmachine: (ha-565925-m05)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m05/boot2docker.iso'/>
	I0610 11:05:33.658450   33088 main.go:141] libmachine: (ha-565925-m05)       <target dev='hdc' bus='scsi'/>
	I0610 11:05:33.658463   33088 main.go:141] libmachine: (ha-565925-m05)       <readonly/>
	I0610 11:05:33.658473   33088 main.go:141] libmachine: (ha-565925-m05)     </disk>
	I0610 11:05:33.658491   33088 main.go:141] libmachine: (ha-565925-m05)     <disk type='file' device='disk'>
	I0610 11:05:33.658511   33088 main.go:141] libmachine: (ha-565925-m05)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0610 11:05:33.658525   33088 main.go:141] libmachine: (ha-565925-m05)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m05/ha-565925-m05.rawdisk'/>
	I0610 11:05:33.658542   33088 main.go:141] libmachine: (ha-565925-m05)       <target dev='hda' bus='virtio'/>
	I0610 11:05:33.658548   33088 main.go:141] libmachine: (ha-565925-m05)     </disk>
	I0610 11:05:33.658558   33088 main.go:141] libmachine: (ha-565925-m05)     <interface type='network'>
	I0610 11:05:33.658574   33088 main.go:141] libmachine: (ha-565925-m05)       <source network='mk-ha-565925'/>
	I0610 11:05:33.658591   33088 main.go:141] libmachine: (ha-565925-m05)       <model type='virtio'/>
	I0610 11:05:33.658599   33088 main.go:141] libmachine: (ha-565925-m05)     </interface>
	I0610 11:05:33.658606   33088 main.go:141] libmachine: (ha-565925-m05)     <interface type='network'>
	I0610 11:05:33.658615   33088 main.go:141] libmachine: (ha-565925-m05)       <source network='default'/>
	I0610 11:05:33.658632   33088 main.go:141] libmachine: (ha-565925-m05)       <model type='virtio'/>
	I0610 11:05:33.658644   33088 main.go:141] libmachine: (ha-565925-m05)     </interface>
	I0610 11:05:33.658654   33088 main.go:141] libmachine: (ha-565925-m05)     <serial type='pty'>
	I0610 11:05:33.658667   33088 main.go:141] libmachine: (ha-565925-m05)       <target port='0'/>
	I0610 11:05:33.658683   33088 main.go:141] libmachine: (ha-565925-m05)     </serial>
	I0610 11:05:33.658696   33088 main.go:141] libmachine: (ha-565925-m05)     <console type='pty'>
	I0610 11:05:33.658707   33088 main.go:141] libmachine: (ha-565925-m05)       <target type='serial' port='0'/>
	I0610 11:05:33.658714   33088 main.go:141] libmachine: (ha-565925-m05)     </console>
	I0610 11:05:33.658724   33088 main.go:141] libmachine: (ha-565925-m05)     <rng model='virtio'>
	I0610 11:05:33.658734   33088 main.go:141] libmachine: (ha-565925-m05)       <backend model='random'>/dev/random</backend>
	I0610 11:05:33.658744   33088 main.go:141] libmachine: (ha-565925-m05)     </rng>
	I0610 11:05:33.658759   33088 main.go:141] libmachine: (ha-565925-m05)     
	I0610 11:05:33.658773   33088 main.go:141] libmachine: (ha-565925-m05)     
	I0610 11:05:33.658785   33088 main.go:141] libmachine: (ha-565925-m05)   </devices>
	I0610 11:05:33.658792   33088 main.go:141] libmachine: (ha-565925-m05) </domain>
	I0610 11:05:33.658802   33088 main.go:141] libmachine: (ha-565925-m05) 
	I0610 11:05:33.665269   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:a7:1e:bb in network default
	I0610 11:05:33.666421   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:33.666908   33088 main.go:141] libmachine: (ha-565925-m05) Ensuring networks are active...
	I0610 11:05:33.667683   33088 main.go:141] libmachine: (ha-565925-m05) Ensuring network default is active
	I0610 11:05:33.667996   33088 main.go:141] libmachine: (ha-565925-m05) Ensuring network mk-ha-565925 is active
	I0610 11:05:33.668395   33088 main.go:141] libmachine: (ha-565925-m05) Getting domain xml...
	I0610 11:05:33.669154   33088 main.go:141] libmachine: (ha-565925-m05) Creating domain...
	I0610 11:05:34.890423   33088 main.go:141] libmachine: (ha-565925-m05) Waiting to get IP...
	I0610 11:05:34.891121   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:34.891475   33088 main.go:141] libmachine: (ha-565925-m05) DBG | unable to find current IP address of domain ha-565925-m05 in network mk-ha-565925
	I0610 11:05:34.891525   33088 main.go:141] libmachine: (ha-565925-m05) DBG | I0610 11:05:34.891456   33124 retry.go:31] will retry after 268.820674ms: waiting for machine to come up
	I0610 11:05:35.162033   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:35.162531   33088 main.go:141] libmachine: (ha-565925-m05) DBG | unable to find current IP address of domain ha-565925-m05 in network mk-ha-565925
	I0610 11:05:35.162556   33088 main.go:141] libmachine: (ha-565925-m05) DBG | I0610 11:05:35.162486   33124 retry.go:31] will retry after 264.70629ms: waiting for machine to come up
	I0610 11:05:35.428902   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:35.429316   33088 main.go:141] libmachine: (ha-565925-m05) DBG | unable to find current IP address of domain ha-565925-m05 in network mk-ha-565925
	I0610 11:05:35.429344   33088 main.go:141] libmachine: (ha-565925-m05) DBG | I0610 11:05:35.429286   33124 retry.go:31] will retry after 417.03248ms: waiting for machine to come up
	I0610 11:05:35.847756   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:35.848285   33088 main.go:141] libmachine: (ha-565925-m05) DBG | unable to find current IP address of domain ha-565925-m05 in network mk-ha-565925
	I0610 11:05:35.848315   33088 main.go:141] libmachine: (ha-565925-m05) DBG | I0610 11:05:35.848177   33124 retry.go:31] will retry after 559.9939ms: waiting for machine to come up
	I0610 11:05:36.409623   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:36.410119   33088 main.go:141] libmachine: (ha-565925-m05) DBG | unable to find current IP address of domain ha-565925-m05 in network mk-ha-565925
	I0610 11:05:36.410158   33088 main.go:141] libmachine: (ha-565925-m05) DBG | I0610 11:05:36.410076   33124 retry.go:31] will retry after 709.034292ms: waiting for machine to come up
	I0610 11:05:37.120923   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:37.121367   33088 main.go:141] libmachine: (ha-565925-m05) DBG | unable to find current IP address of domain ha-565925-m05 in network mk-ha-565925
	I0610 11:05:37.121394   33088 main.go:141] libmachine: (ha-565925-m05) DBG | I0610 11:05:37.121333   33124 retry.go:31] will retry after 744.762213ms: waiting for machine to come up
	I0610 11:05:37.867347   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:37.867788   33088 main.go:141] libmachine: (ha-565925-m05) DBG | unable to find current IP address of domain ha-565925-m05 in network mk-ha-565925
	I0610 11:05:37.867818   33088 main.go:141] libmachine: (ha-565925-m05) DBG | I0610 11:05:37.867740   33124 retry.go:31] will retry after 1.133621839s: waiting for machine to come up
	I0610 11:05:39.002997   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:39.003369   33088 main.go:141] libmachine: (ha-565925-m05) DBG | unable to find current IP address of domain ha-565925-m05 in network mk-ha-565925
	I0610 11:05:39.003421   33088 main.go:141] libmachine: (ha-565925-m05) DBG | I0610 11:05:39.003320   33124 retry.go:31] will retry after 1.056694054s: waiting for machine to come up
	I0610 11:05:40.061471   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:40.061959   33088 main.go:141] libmachine: (ha-565925-m05) DBG | unable to find current IP address of domain ha-565925-m05 in network mk-ha-565925
	I0610 11:05:40.061990   33088 main.go:141] libmachine: (ha-565925-m05) DBG | I0610 11:05:40.061912   33124 retry.go:31] will retry after 1.50638597s: waiting for machine to come up
	I0610 11:05:41.570069   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:41.570531   33088 main.go:141] libmachine: (ha-565925-m05) DBG | unable to find current IP address of domain ha-565925-m05 in network mk-ha-565925
	I0610 11:05:41.570555   33088 main.go:141] libmachine: (ha-565925-m05) DBG | I0610 11:05:41.570490   33124 retry.go:31] will retry after 2.043905471s: waiting for machine to come up
	I0610 11:05:43.615918   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:43.616493   33088 main.go:141] libmachine: (ha-565925-m05) DBG | unable to find current IP address of domain ha-565925-m05 in network mk-ha-565925
	I0610 11:05:43.616525   33088 main.go:141] libmachine: (ha-565925-m05) DBG | I0610 11:05:43.616421   33124 retry.go:31] will retry after 2.426130995s: waiting for machine to come up
	I0610 11:05:46.044648   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:46.045100   33088 main.go:141] libmachine: (ha-565925-m05) DBG | unable to find current IP address of domain ha-565925-m05 in network mk-ha-565925
	I0610 11:05:46.045118   33088 main.go:141] libmachine: (ha-565925-m05) DBG | I0610 11:05:46.045068   33124 retry.go:31] will retry after 3.357722581s: waiting for machine to come up
	I0610 11:05:49.403959   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:49.404414   33088 main.go:141] libmachine: (ha-565925-m05) DBG | unable to find current IP address of domain ha-565925-m05 in network mk-ha-565925
	I0610 11:05:49.404438   33088 main.go:141] libmachine: (ha-565925-m05) DBG | I0610 11:05:49.404374   33124 retry.go:31] will retry after 3.522159126s: waiting for machine to come up
	I0610 11:05:52.929994   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:52.930414   33088 main.go:141] libmachine: (ha-565925-m05) Found IP for machine: 192.168.39.27
	I0610 11:05:52.930440   33088 main.go:141] libmachine: (ha-565925-m05) Reserving static IP address...
	I0610 11:05:52.930450   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has current primary IP address 192.168.39.27 and MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:52.930836   33088 main.go:141] libmachine: (ha-565925-m05) DBG | unable to find host DHCP lease matching {name: "ha-565925-m05", mac: "52:54:00:0f:6b:c3", ip: "192.168.39.27"} in network mk-ha-565925
	I0610 11:05:53.005008   33088 main.go:141] libmachine: (ha-565925-m05) DBG | Getting to WaitForSSH function...
	I0610 11:05:53.005039   33088 main.go:141] libmachine: (ha-565925-m05) Reserved static IP address: 192.168.39.27
	I0610 11:05:53.005054   33088 main.go:141] libmachine: (ha-565925-m05) Waiting for SSH to be available...
	I0610 11:05:53.007561   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:53.008004   33088 main.go:141] libmachine: (ha-565925-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6b:c3", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 12:05:46 +0000 UTC Type:0 Mac:52:54:00:0f:6b:c3 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0f:6b:c3}
	I0610 11:05:53.008032   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined IP address 192.168.39.27 and MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:53.008162   33088 main.go:141] libmachine: (ha-565925-m05) DBG | Using SSH client type: external
	I0610 11:05:53.008190   33088 main.go:141] libmachine: (ha-565925-m05) DBG | Using SSH private key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m05/id_rsa (-rw-------)
	I0610 11:05:53.008256   33088 main.go:141] libmachine: (ha-565925-m05) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.27 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m05/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0610 11:05:53.008288   33088 main.go:141] libmachine: (ha-565925-m05) DBG | About to run SSH command:
	I0610 11:05:53.008306   33088 main.go:141] libmachine: (ha-565925-m05) DBG | exit 0
	I0610 11:05:53.137057   33088 main.go:141] libmachine: (ha-565925-m05) DBG | SSH cmd err, output: <nil>: 
	I0610 11:05:53.137428   33088 main.go:141] libmachine: (ha-565925-m05) KVM machine creation complete!
	I0610 11:05:53.137693   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetConfigRaw
	I0610 11:05:53.138246   33088 main.go:141] libmachine: (ha-565925-m05) Calling .DriverName
	I0610 11:05:53.138441   33088 main.go:141] libmachine: (ha-565925-m05) Calling .DriverName
	I0610 11:05:53.138630   33088 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0610 11:05:53.138644   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetState
	I0610 11:05:53.139851   33088 main.go:141] libmachine: Detecting operating system of created instance...
	I0610 11:05:53.139874   33088 main.go:141] libmachine: Waiting for SSH to be available...
	I0610 11:05:53.139882   33088 main.go:141] libmachine: Getting to WaitForSSH function...
	I0610 11:05:53.139892   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHHostname
	I0610 11:05:53.142094   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:53.142462   33088 main.go:141] libmachine: (ha-565925-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6b:c3", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 12:05:46 +0000 UTC Type:0 Mac:52:54:00:0f:6b:c3 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-565925-m05 Clientid:01:52:54:00:0f:6b:c3}
	I0610 11:05:53.142490   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined IP address 192.168.39.27 and MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:53.142633   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHPort
	I0610 11:05:53.142774   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHKeyPath
	I0610 11:05:53.142870   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHKeyPath
	I0610 11:05:53.143013   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHUsername
	I0610 11:05:53.143155   33088 main.go:141] libmachine: Using SSH client type: native
	I0610 11:05:53.143402   33088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0610 11:05:53.143414   33088 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0610 11:05:53.244134   33088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 11:05:53.244157   33088 main.go:141] libmachine: Detecting the provisioner...
	I0610 11:05:53.244168   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHHostname
	I0610 11:05:53.247014   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:53.247533   33088 main.go:141] libmachine: (ha-565925-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6b:c3", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 12:05:46 +0000 UTC Type:0 Mac:52:54:00:0f:6b:c3 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-565925-m05 Clientid:01:52:54:00:0f:6b:c3}
	I0610 11:05:53.247557   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined IP address 192.168.39.27 and MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:53.247755   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHPort
	I0610 11:05:53.247948   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHKeyPath
	I0610 11:05:53.248142   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHKeyPath
	I0610 11:05:53.248368   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHUsername
	I0610 11:05:53.248584   33088 main.go:141] libmachine: Using SSH client type: native
	I0610 11:05:53.248780   33088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0610 11:05:53.248793   33088 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0610 11:05:53.349301   33088 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0610 11:05:53.349408   33088 main.go:141] libmachine: found compatible host: buildroot
	I0610 11:05:53.349419   33088 main.go:141] libmachine: Provisioning with buildroot...
	I0610 11:05:53.349426   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetMachineName
	I0610 11:05:53.349709   33088 buildroot.go:166] provisioning hostname "ha-565925-m05"
	I0610 11:05:53.349737   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetMachineName
	I0610 11:05:53.349927   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHHostname
	I0610 11:05:53.353100   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:53.353576   33088 main.go:141] libmachine: (ha-565925-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6b:c3", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 12:05:46 +0000 UTC Type:0 Mac:52:54:00:0f:6b:c3 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-565925-m05 Clientid:01:52:54:00:0f:6b:c3}
	I0610 11:05:53.353618   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined IP address 192.168.39.27 and MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:53.353735   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHPort
	I0610 11:05:53.353907   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHKeyPath
	I0610 11:05:53.354064   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHKeyPath
	I0610 11:05:53.354239   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHUsername
	I0610 11:05:53.354402   33088 main.go:141] libmachine: Using SSH client type: native
	I0610 11:05:53.354550   33088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0610 11:05:53.354562   33088 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565925-m05 && echo "ha-565925-m05" | sudo tee /etc/hostname
	I0610 11:05:53.469646   33088 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565925-m05
	
	I0610 11:05:53.469673   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHHostname
	I0610 11:05:53.472488   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:53.472939   33088 main.go:141] libmachine: (ha-565925-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6b:c3", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 12:05:46 +0000 UTC Type:0 Mac:52:54:00:0f:6b:c3 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-565925-m05 Clientid:01:52:54:00:0f:6b:c3}
	I0610 11:05:53.472999   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined IP address 192.168.39.27 and MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:53.473234   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHPort
	I0610 11:05:53.473423   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHKeyPath
	I0610 11:05:53.473604   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHKeyPath
	I0610 11:05:53.473814   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHUsername
	I0610 11:05:53.473986   33088 main.go:141] libmachine: Using SSH client type: native
	I0610 11:05:53.474170   33088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0610 11:05:53.474186   33088 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565925-m05' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565925-m05/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565925-m05' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 11:05:53.585958   33088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 11:05:53.586000   33088 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19046-3880/.minikube CaCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19046-3880/.minikube}
	I0610 11:05:53.586024   33088 buildroot.go:174] setting up certificates
	I0610 11:05:53.586034   33088 provision.go:84] configureAuth start
	I0610 11:05:53.586043   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetMachineName
	I0610 11:05:53.586318   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetIP
	I0610 11:05:53.589044   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:53.589538   33088 main.go:141] libmachine: (ha-565925-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6b:c3", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 12:05:46 +0000 UTC Type:0 Mac:52:54:00:0f:6b:c3 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-565925-m05 Clientid:01:52:54:00:0f:6b:c3}
	I0610 11:05:53.589559   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined IP address 192.168.39.27 and MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:53.589816   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHHostname
	I0610 11:05:53.592041   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:53.592457   33088 main.go:141] libmachine: (ha-565925-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6b:c3", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 12:05:46 +0000 UTC Type:0 Mac:52:54:00:0f:6b:c3 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-565925-m05 Clientid:01:52:54:00:0f:6b:c3}
	I0610 11:05:53.592496   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined IP address 192.168.39.27 and MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:53.592692   33088 provision.go:143] copyHostCerts
	I0610 11:05:53.592716   33088 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 11:05:53.592745   33088 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem, removing ...
	I0610 11:05:53.592753   33088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 11:05:53.592822   33088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem (1082 bytes)
	I0610 11:05:53.592934   33088 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 11:05:53.592984   33088 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem, removing ...
	I0610 11:05:53.592994   33088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 11:05:53.593027   33088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem (1123 bytes)
	I0610 11:05:53.593088   33088 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 11:05:53.593104   33088 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem, removing ...
	I0610 11:05:53.593110   33088 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 11:05:53.593133   33088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem (1675 bytes)
	I0610 11:05:53.593181   33088 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem org=jenkins.ha-565925-m05 san=[127.0.0.1 192.168.39.27 ha-565925-m05 localhost minikube]
	I0610 11:05:53.709242   33088 provision.go:177] copyRemoteCerts
	I0610 11:05:53.709295   33088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 11:05:53.709315   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHHostname
	I0610 11:05:53.711923   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:53.712284   33088 main.go:141] libmachine: (ha-565925-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6b:c3", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 12:05:46 +0000 UTC Type:0 Mac:52:54:00:0f:6b:c3 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-565925-m05 Clientid:01:52:54:00:0f:6b:c3}
	I0610 11:05:53.712313   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined IP address 192.168.39.27 and MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:53.712551   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHPort
	I0610 11:05:53.712754   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHKeyPath
	I0610 11:05:53.712920   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHUsername
	I0610 11:05:53.713082   33088 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m05/id_rsa Username:docker}
	I0610 11:05:53.790780   33088 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 11:05:53.790844   33088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 11:05:53.817337   33088 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 11:05:53.817398   33088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0610 11:05:53.843395   33088 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 11:05:53.843459   33088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 11:05:53.868647   33088 provision.go:87] duration metric: took 282.600493ms to configureAuth
	I0610 11:05:53.868694   33088 buildroot.go:189] setting minikube options for container-runtime
	I0610 11:05:53.869007   33088 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:05:53.869099   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHHostname
	I0610 11:05:53.872194   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:53.872638   33088 main.go:141] libmachine: (ha-565925-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6b:c3", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 12:05:46 +0000 UTC Type:0 Mac:52:54:00:0f:6b:c3 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-565925-m05 Clientid:01:52:54:00:0f:6b:c3}
	I0610 11:05:53.872662   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined IP address 192.168.39.27 and MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:53.872860   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHPort
	I0610 11:05:53.873115   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHKeyPath
	I0610 11:05:53.873319   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHKeyPath
	I0610 11:05:53.873475   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHUsername
	I0610 11:05:53.873639   33088 main.go:141] libmachine: Using SSH client type: native
	I0610 11:05:53.873853   33088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0610 11:05:53.873874   33088 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 11:05:54.132432   33088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 11:05:54.132466   33088 main.go:141] libmachine: Checking connection to Docker...
	I0610 11:05:54.132477   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetURL
	I0610 11:05:54.133869   33088 main.go:141] libmachine: (ha-565925-m05) DBG | Using libvirt version 6000000
	I0610 11:05:54.136261   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:54.136605   33088 main.go:141] libmachine: (ha-565925-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6b:c3", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 12:05:46 +0000 UTC Type:0 Mac:52:54:00:0f:6b:c3 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-565925-m05 Clientid:01:52:54:00:0f:6b:c3}
	I0610 11:05:54.136646   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined IP address 192.168.39.27 and MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:54.136775   33088 main.go:141] libmachine: Docker is up and running!
	I0610 11:05:54.136791   33088 main.go:141] libmachine: Reticulating splines...
	I0610 11:05:54.136798   33088 client.go:171] duration metric: took 20.861009468s to LocalClient.Create
	I0610 11:05:54.136817   33088 start.go:167] duration metric: took 20.861069215s to libmachine.API.Create "ha-565925"
	I0610 11:05:54.136826   33088 start.go:293] postStartSetup for "ha-565925-m05" (driver="kvm2")
	I0610 11:05:54.136835   33088 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 11:05:54.136850   33088 main.go:141] libmachine: (ha-565925-m05) Calling .DriverName
	I0610 11:05:54.137108   33088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 11:05:54.137133   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHHostname
	I0610 11:05:54.139186   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:54.139514   33088 main.go:141] libmachine: (ha-565925-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6b:c3", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 12:05:46 +0000 UTC Type:0 Mac:52:54:00:0f:6b:c3 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-565925-m05 Clientid:01:52:54:00:0f:6b:c3}
	I0610 11:05:54.139533   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined IP address 192.168.39.27 and MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:54.139677   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHPort
	I0610 11:05:54.139857   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHKeyPath
	I0610 11:05:54.139989   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHUsername
	I0610 11:05:54.140156   33088 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m05/id_rsa Username:docker}
	I0610 11:05:54.218713   33088 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 11:05:54.223078   33088 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 11:05:54.223104   33088 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/addons for local assets ...
	I0610 11:05:54.223185   33088 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/files for local assets ...
	I0610 11:05:54.223295   33088 filesync.go:149] local asset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> 107582.pem in /etc/ssl/certs
	I0610 11:05:54.223310   33088 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /etc/ssl/certs/107582.pem
	I0610 11:05:54.223417   33088 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 11:05:54.232346   33088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /etc/ssl/certs/107582.pem (1708 bytes)
	I0610 11:05:54.255119   33088 start.go:296] duration metric: took 118.278519ms for postStartSetup
	I0610 11:05:54.255171   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetConfigRaw
	I0610 11:05:54.255779   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetIP
	I0610 11:05:54.258305   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:54.258705   33088 main.go:141] libmachine: (ha-565925-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6b:c3", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 12:05:46 +0000 UTC Type:0 Mac:52:54:00:0f:6b:c3 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-565925-m05 Clientid:01:52:54:00:0f:6b:c3}
	I0610 11:05:54.258724   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined IP address 192.168.39.27 and MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:54.259032   33088 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json ...
	I0610 11:05:54.259271   33088 start.go:128] duration metric: took 21.002439125s to createHost
	I0610 11:05:54.259299   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHHostname
	I0610 11:05:54.261858   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:54.262339   33088 main.go:141] libmachine: (ha-565925-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6b:c3", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 12:05:46 +0000 UTC Type:0 Mac:52:54:00:0f:6b:c3 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-565925-m05 Clientid:01:52:54:00:0f:6b:c3}
	I0610 11:05:54.262365   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined IP address 192.168.39.27 and MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:54.262558   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHPort
	I0610 11:05:54.262805   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHKeyPath
	I0610 11:05:54.262970   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHKeyPath
	I0610 11:05:54.263133   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHUsername
	I0610 11:05:54.263330   33088 main.go:141] libmachine: Using SSH client type: native
	I0610 11:05:54.263544   33088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.27 22 <nil> <nil>}
	I0610 11:05:54.263560   33088 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 11:05:54.361463   33088 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718017554.335756474
	
	I0610 11:05:54.361484   33088 fix.go:216] guest clock: 1718017554.335756474
	I0610 11:05:54.361493   33088 fix.go:229] Guest: 2024-06-10 11:05:54.335756474 +0000 UTC Remote: 2024-06-10 11:05:54.259285555 +0000 UTC m=+21.237047924 (delta=76.470919ms)
	I0610 11:05:54.361524   33088 fix.go:200] guest clock delta is within tolerance: 76.470919ms
	I0610 11:05:54.361528   33088 start.go:83] releasing machines lock for "ha-565925-m05", held for 21.104871973s
	I0610 11:05:54.361545   33088 main.go:141] libmachine: (ha-565925-m05) Calling .DriverName
	I0610 11:05:54.361811   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetIP
	I0610 11:05:54.364768   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:54.365206   33088 main.go:141] libmachine: (ha-565925-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6b:c3", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 12:05:46 +0000 UTC Type:0 Mac:52:54:00:0f:6b:c3 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-565925-m05 Clientid:01:52:54:00:0f:6b:c3}
	I0610 11:05:54.365228   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined IP address 192.168.39.27 and MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:54.365428   33088 main.go:141] libmachine: (ha-565925-m05) Calling .DriverName
	I0610 11:05:54.365918   33088 main.go:141] libmachine: (ha-565925-m05) Calling .DriverName
	I0610 11:05:54.366082   33088 main.go:141] libmachine: (ha-565925-m05) Calling .DriverName
	I0610 11:05:54.366173   33088 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 11:05:54.366209   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHHostname
	I0610 11:05:54.366324   33088 ssh_runner.go:195] Run: systemctl --version
	I0610 11:05:54.366356   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHHostname
	I0610 11:05:54.368864   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:54.369190   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:54.369232   33088 main.go:141] libmachine: (ha-565925-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6b:c3", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 12:05:46 +0000 UTC Type:0 Mac:52:54:00:0f:6b:c3 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-565925-m05 Clientid:01:52:54:00:0f:6b:c3}
	I0610 11:05:54.369253   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined IP address 192.168.39.27 and MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:54.369418   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHPort
	I0610 11:05:54.369573   33088 main.go:141] libmachine: (ha-565925-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6b:c3", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 12:05:46 +0000 UTC Type:0 Mac:52:54:00:0f:6b:c3 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-565925-m05 Clientid:01:52:54:00:0f:6b:c3}
	I0610 11:05:54.369599   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined IP address 192.168.39.27 and MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:54.369603   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHKeyPath
	I0610 11:05:54.369730   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHPort
	I0610 11:05:54.369892   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHUsername
	I0610 11:05:54.369954   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHKeyPath
	I0610 11:05:54.370049   33088 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m05/id_rsa Username:docker}
	I0610 11:05:54.370136   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetSSHUsername
	I0610 11:05:54.370284   33088 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925-m05/id_rsa Username:docker}
	I0610 11:05:54.484637   33088 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 11:05:54.649990   33088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 11:05:54.657545   33088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 11:05:54.657622   33088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 11:05:54.675426   33088 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 11:05:54.675450   33088 start.go:494] detecting cgroup driver to use...
	I0610 11:05:54.675510   33088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 11:05:54.693214   33088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 11:05:54.706110   33088 docker.go:217] disabling cri-docker service (if available) ...
	I0610 11:05:54.706167   33088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 11:05:54.720423   33088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 11:05:54.734100   33088 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 11:05:54.860493   33088 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 11:05:55.014189   33088 docker.go:233] disabling docker service ...
	I0610 11:05:55.014256   33088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 11:05:55.028057   33088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 11:05:55.041575   33088 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 11:05:55.185182   33088 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 11:05:55.315731   33088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 11:05:55.331088   33088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 11:05:55.350002   33088 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0610 11:05:55.350068   33088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:05:55.360500   33088 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 11:05:55.360563   33088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:05:55.370904   33088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:05:55.381121   33088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:05:55.392315   33088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 11:05:55.403930   33088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:05:55.414190   33088 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:05:55.430781   33088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:05:55.440858   33088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 11:05:55.450015   33088 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0610 11:05:55.450070   33088 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0610 11:05:55.463336   33088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 11:05:55.474657   33088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:05:55.594017   33088 ssh_runner.go:195] Run: sudo systemctl restart crio
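	For reference, a hedged sketch of what the sed edits above should leave in the CRI-O drop-in (reconstructed from the logged commands, not captured from the VM):
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # expected, approximately:
	    #   pause_image = "registry.k8s.io/pause:3.9"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   default_sysctls = [
	    #     "net.ipv4.ip_unprivileged_port_start=0",
	    #   ]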
	I0610 11:05:55.725622   33088 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 11:05:55.725704   33088 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 11:05:55.730131   33088 start.go:562] Will wait 60s for crictl version
	I0610 11:05:55.730201   33088 ssh_runner.go:195] Run: which crictl
	I0610 11:05:55.733782   33088 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 11:05:55.773926   33088 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0610 11:05:55.774028   33088 ssh_runner.go:195] Run: crio --version
	I0610 11:05:55.801475   33088 ssh_runner.go:195] Run: crio --version
	I0610 11:05:55.832915   33088 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0610 11:05:55.834308   33088 main.go:141] libmachine: (ha-565925-m05) Calling .GetIP
	I0610 11:05:55.837435   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:55.837867   33088 main.go:141] libmachine: (ha-565925-m05) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0f:6b:c3", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 12:05:46 +0000 UTC Type:0 Mac:52:54:00:0f:6b:c3 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:ha-565925-m05 Clientid:01:52:54:00:0f:6b:c3}
	I0610 11:05:55.837898   33088 main.go:141] libmachine: (ha-565925-m05) DBG | domain ha-565925-m05 has defined IP address 192.168.39.27 and MAC address 52:54:00:0f:6b:c3 in network mk-ha-565925
	I0610 11:05:55.838080   33088 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0610 11:05:55.842268   33088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 11:05:55.854625   33088 mustload.go:65] Loading cluster: ha-565925
	I0610 11:05:55.854837   33088 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:05:55.855150   33088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:05:55.855187   33088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:05:55.870182   33088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34341
	I0610 11:05:55.870634   33088 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:05:55.871062   33088 main.go:141] libmachine: Using API Version  1
	I0610 11:05:55.871080   33088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:05:55.871406   33088 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:05:55.871584   33088 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 11:05:55.873030   33088 host.go:66] Checking if "ha-565925" exists ...
	I0610 11:05:55.873569   33088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:05:55.873621   33088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:05:55.888032   33088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33429
	I0610 11:05:55.888385   33088 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:05:55.888825   33088 main.go:141] libmachine: Using API Version  1
	I0610 11:05:55.888846   33088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:05:55.889195   33088 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:05:55.889386   33088 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 11:05:55.889551   33088 certs.go:68] Setting up /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925 for IP: 192.168.39.27
	I0610 11:05:55.889561   33088 certs.go:194] generating shared ca certs ...
	I0610 11:05:55.889577   33088 certs.go:226] acquiring lock for ca certs: {Name:mke8b68fecbd8b649419d142cdde25446085a9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:05:55.889689   33088 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key
	I0610 11:05:55.889723   33088 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key
	I0610 11:05:55.889732   33088 certs.go:256] generating profile certs ...
	I0610 11:05:55.889803   33088 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.key
	I0610 11:05:55.889827   33088 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.a8c20c38
	I0610 11:05:55.889841   33088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.a8c20c38 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.230 192.168.39.27 192.168.39.254]
	I0610 11:05:56.123620   33088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.a8c20c38 ...
	I0610 11:05:56.123648   33088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.a8c20c38: {Name:mk8e2bc90798fce2de113f888b7fa6298ecc7640 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:05:56.123795   33088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.a8c20c38 ...
	I0610 11:05:56.123806   33088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.a8c20c38: {Name:mkcf82457e9813c946934d7960aaea46877228f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:05:56.123872   33088 certs.go:381] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.a8c20c38 -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt
	I0610 11:05:56.124012   33088 certs.go:385] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.a8c20c38 -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key
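	The SANs requested for the regenerated apiserver certificate (service IPs plus each control-plane IP and the VIP 192.168.39.254, per the crypto.go line above) can be double-checked with a plain openssl call; a minimal sketch, assuming the profile path from the log:
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'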
	I0610 11:05:56.124136   33088 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key
	I0610 11:05:56.124150   33088 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 11:05:56.124162   33088 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 11:05:56.124172   33088 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 11:05:56.124185   33088 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 11:05:56.124198   33088 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 11:05:56.124215   33088 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 11:05:56.124232   33088 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 11:05:56.124243   33088 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 11:05:56.124284   33088 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem (1338 bytes)
	W0610 11:05:56.124309   33088 certs.go:480] ignoring /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758_empty.pem, impossibly tiny 0 bytes
	I0610 11:05:56.124318   33088 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 11:05:56.124340   33088 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem (1082 bytes)
	I0610 11:05:56.124365   33088 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem (1123 bytes)
	I0610 11:05:56.124388   33088 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem (1675 bytes)
	I0610 11:05:56.124422   33088 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem (1708 bytes)
	I0610 11:05:56.124449   33088 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /usr/share/ca-certificates/107582.pem
	I0610 11:05:56.124462   33088 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:05:56.124476   33088 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem -> /usr/share/ca-certificates/10758.pem
	I0610 11:05:56.124515   33088 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 11:05:56.127760   33088 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 11:05:56.128179   33088 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 11:05:56.128210   33088 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 11:05:56.128419   33088 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 11:05:56.128606   33088 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 11:05:56.128771   33088 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 11:05:56.128894   33088 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 11:05:56.205237   33088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0610 11:05:56.210417   33088 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0610 11:05:56.226211   33088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0610 11:05:56.230453   33088 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0610 11:05:56.240999   33088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0610 11:05:56.245303   33088 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0610 11:05:56.255531   33088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0610 11:05:56.259584   33088 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0610 11:05:56.270735   33088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0610 11:05:56.274872   33088 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0610 11:05:56.285575   33088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0610 11:05:56.289578   33088 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0610 11:05:56.299551   33088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 11:05:56.323125   33088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 11:05:56.345987   33088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 11:05:56.369344   33088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 11:05:56.390770   33088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0610 11:05:56.414223   33088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 11:05:56.439885   33088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 11:05:56.461847   33088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 11:05:56.485880   33088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /usr/share/ca-certificates/107582.pem (1708 bytes)
	I0610 11:05:56.507272   33088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 11:05:56.530113   33088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem --> /usr/share/ca-certificates/10758.pem (1338 bytes)
	I0610 11:05:56.553784   33088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0610 11:05:56.569116   33088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0610 11:05:56.584078   33088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0610 11:05:56.600542   33088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0610 11:05:56.615841   33088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0610 11:05:56.632585   33088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0610 11:05:56.648984   33088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0610 11:05:56.666238   33088 ssh_runner.go:195] Run: openssl version
	I0610 11:05:56.671904   33088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10758.pem && ln -fs /usr/share/ca-certificates/10758.pem /etc/ssl/certs/10758.pem"
	I0610 11:05:56.683687   33088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10758.pem
	I0610 11:05:56.687995   33088 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:34 /usr/share/ca-certificates/10758.pem
	I0610 11:05:56.688044   33088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10758.pem
	I0610 11:05:56.693957   33088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10758.pem /etc/ssl/certs/51391683.0"
	I0610 11:05:56.705393   33088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107582.pem && ln -fs /usr/share/ca-certificates/107582.pem /etc/ssl/certs/107582.pem"
	I0610 11:05:56.716751   33088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107582.pem
	I0610 11:05:56.721450   33088 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:34 /usr/share/ca-certificates/107582.pem
	I0610 11:05:56.721502   33088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107582.pem
	I0610 11:05:56.726794   33088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107582.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 11:05:56.737980   33088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 11:05:56.749596   33088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:05:56.753725   33088 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:05:56.753775   33088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:05:56.759757   33088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
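	The symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject hashes of the corresponding certificates; a quick cross-check of one of them, assuming the paths from the log:
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # expect b5213941
	    ls -l /etc/ssl/certs/b5213941.0                                           # should resolve to minikubeCA.pem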
	I0610 11:05:56.771695   33088 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 11:05:56.776189   33088 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 11:05:56.776241   33088 kubeadm.go:928] updating node {m05 192.168.39.27 8443 v1.30.1  true true} ...
	I0610 11:05:56.776340   33088 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565925-m05 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.27
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 11:05:56.776366   33088 kube-vip.go:115] generating kube-vip config ...
	I0610 11:05:56.776402   33088 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0610 11:05:56.791134   33088 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0610 11:05:56.791260   33088 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
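	Once this static pod comes up, the VIP 192.168.39.254 configured above is claimed by whichever control-plane node currently holds the plndr-cp-lock lease; a rough spot-check from inside that node (a sketch, not part of the test run):
	    ip addr show eth0 | grep 192.168.39.254          # VIP is bound on the current leader only
	    curl -sk https://192.168.39.254:8443/healthz     # apiserver reachable through the VIP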
	I0610 11:05:56.791317   33088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 11:05:56.800935   33088 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0610 11:05:56.801013   33088 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0610 11:05:56.810292   33088 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0610 11:05:56.810330   33088 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0610 11:05:56.810348   33088 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0610 11:05:56.810370   33088 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0610 11:05:56.810401   33088 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0610 11:05:56.810413   33088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:05:56.810422   33088 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0610 11:05:56.810456   33088 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0610 11:05:56.818784   33088 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0610 11:05:56.818816   33088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0610 11:05:56.818957   33088 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0610 11:05:56.818978   33088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0610 11:05:56.848726   33088 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0610 11:05:56.848819   33088 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0610 11:05:56.965234   33088 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0610 11:05:56.965279   33088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
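	Since the binaries are fetched against the published .sha256 files (see the dl.k8s.io URLs above), the copied kubelet can be re-verified on the guest; a minimal sketch, assuming outbound access from the VM:
	    sha256sum /var/lib/minikube/binaries/v1.30.1/kubelet
	    curl -sL https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256   # compare the two digests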
	I0610 11:05:57.690620   33088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0610 11:05:57.699569   33088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0610 11:05:57.714452   33088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 11:05:57.729836   33088 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0610 11:05:57.745387   33088 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0610 11:05:57.748828   33088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
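	A trivial check that the control-plane endpoint now resolves on the guest (assuming the /etc/hosts rewrite above succeeded):
	    grep control-plane.minikube.internal /etc/hosts   # expect 192.168.39.254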
	I0610 11:05:57.760631   33088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:05:57.892606   33088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 11:05:57.908273   33088 host.go:66] Checking if "ha-565925" exists ...
	I0610 11:05:57.908654   33088 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:05:57.908701   33088 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:05:57.924298   33088 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33013
	I0610 11:05:57.924787   33088 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:05:57.925293   33088 main.go:141] libmachine: Using API Version  1
	I0610 11:05:57.925318   33088 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:05:57.925667   33088 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:05:57.925899   33088 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 11:05:57.926085   33088 start.go:316] joinCluster: &{Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.229 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m05 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:05:57.926255   33088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0610 11:05:57.926276   33088 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 11:05:57.929341   33088 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 11:05:57.929846   33088 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 11:05:57.929874   33088 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 11:05:57.929999   33088 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 11:05:57.930168   33088 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 11:05:57.930361   33088 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 11:05:57.930508   33088 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 11:05:58.097802   33088 start.go:342] trying to join control-plane node "m05" to cluster: &{Name:m05 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:true Worker:true}
	I0610 11:05:58.097863   33088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 01b8cm.ey9g9oieg71rtdhy --discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565925-m05 --control-plane --apiserver-advertise-address=192.168.39.27 --apiserver-bind-port=8443"
	I0610 11:06:23.660461   33088 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 01b8cm.ey9g9oieg71rtdhy --discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-565925-m05 --control-plane --apiserver-advertise-address=192.168.39.27 --apiserver-bind-port=8443": (25.562573106s)
	I0610 11:06:23.660499   33088 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0610 11:06:24.104352   33088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-565925-m05 minikube.k8s.io/updated_at=2024_06_10T11_06_24_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=ha-565925 minikube.k8s.io/primary=false
	I0610 11:06:24.223863   33088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-565925-m05 node-role.kubernetes.io/control-plane:NoSchedule-
	I0610 11:06:24.346191   33088 start.go:318] duration metric: took 26.420108346s to joinCluster
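	With the join finished and the node labeled and untainted, the new control-plane member can be sanity-checked from any existing node; a minimal sketch, assuming kubectl points at this cluster:
	    kubectl get nodes -o wide
	    kubectl -n kube-system get pods -l component=etcd -o wide   # one etcd pod per control-plane node, including ha-565925-m05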
	I0610 11:06:24.346262   33088 start.go:234] Will wait 6m0s for node &{Name:m05 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:true Worker:true}
	I0610 11:06:24.347980   33088 out.go:177] * Verifying Kubernetes components...
	I0610 11:06:24.346575   33088 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:06:24.349320   33088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:06:24.549113   33088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 11:06:24.566716   33088 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 11:06:24.567072   33088 kapi.go:59] client config for ha-565925: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.crt", KeyFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.key", CAFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfaf80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0610 11:06:24.567206   33088 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.208:8443
	I0610 11:06:24.567663   33088 cert_rotation.go:137] Starting client certificate rotation controller
	I0610 11:06:24.567872   33088 node_ready.go:35] waiting up to 6m0s for node "ha-565925-m05" to be "Ready" ...
	I0610 11:06:24.567968   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m05
	I0610 11:06:24.567981   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:24.567992   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:24.568000   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:24.577260   33088 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0610 11:06:25.068310   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m05
	I0610 11:06:25.068334   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:25.068342   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:25.068346   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:25.072430   33088 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:06:25.568770   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m05
	I0610 11:06:25.568796   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:25.568808   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:25.568814   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:25.572248   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:26.069031   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m05
	I0610 11:06:26.069057   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:26.069066   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:26.069072   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:26.072720   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:26.568980   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m05
	I0610 11:06:26.569006   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:26.569016   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:26.569020   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:26.572129   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:26.572828   33088 node_ready.go:53] node "ha-565925-m05" has status "Ready":"False"
	I0610 11:06:27.068091   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m05
	I0610 11:06:27.068118   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:27.068128   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:27.068136   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:27.071711   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:27.568515   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m05
	I0610 11:06:27.568540   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:27.568551   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:27.568558   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:27.572232   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:28.068931   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m05
	I0610 11:06:28.068971   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:28.068983   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:28.068988   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:28.072119   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:28.568187   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m05
	I0610 11:06:28.568210   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:28.568218   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:28.568228   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:28.571674   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:29.068870   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m05
	I0610 11:06:29.068891   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:29.068899   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:29.068904   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:29.072592   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:29.073322   33088 node_ready.go:53] node "ha-565925-m05" has status "Ready":"False"
	I0610 11:06:29.568325   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m05
	I0610 11:06:29.568348   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:29.568360   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:29.568365   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:29.578052   33088 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0610 11:06:30.068677   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m05
	I0610 11:06:30.068694   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:30.068700   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:30.068706   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:30.072085   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:30.568152   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m05
	I0610 11:06:30.568175   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:30.568185   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:30.568190   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:30.571950   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:31.069100   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925-m05
	I0610 11:06:31.069126   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:31.069137   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:31.069144   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:31.072397   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:31.073068   33088 node_ready.go:49] node "ha-565925-m05" has status "Ready":"True"
	I0610 11:06:31.073089   33088 node_ready.go:38] duration metric: took 6.505191593s for node "ha-565925-m05" to be "Ready" ...
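	The polling loop above is the programmatic equivalent of waiting on the node's Ready condition; the same wait done by hand looks like this (a sketch, not part of the test run):
	    kubectl wait --for=condition=Ready node/ha-565925-m05 --timeout=6m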
	I0610 11:06:31.073097   33088 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 11:06:31.073183   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods
	I0610 11:06:31.073193   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:31.073201   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:31.073209   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:31.084087   33088 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0610 11:06:31.092784   33088 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace to be "Ready" ...
	I0610 11:06:31.092878   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:31.092888   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:31.092895   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:31.092899   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:31.097413   33088 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:06:31.098729   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:31.098746   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:31.098752   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:31.098757   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:31.102269   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:31.593169   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:31.593195   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:31.593206   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:31.593215   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:31.596438   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:31.597315   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:31.597331   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:31.597338   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:31.597342   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:31.600234   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:06:32.093048   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:32.093072   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:32.093083   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:32.093089   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:32.096678   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:32.097345   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:32.097362   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:32.097369   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:32.097373   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:32.100371   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:06:32.593111   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:32.593133   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:32.593143   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:32.593148   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:32.597107   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:32.597845   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:32.597864   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:32.597873   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:32.597879   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:32.601641   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:33.093908   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:33.093927   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:33.093935   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:33.093940   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:33.097867   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:33.099429   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:33.099450   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:33.099462   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:33.099466   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:33.102760   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:33.103509   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:06:33.593153   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:33.593179   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:33.593190   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:33.593195   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:33.596584   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:33.597495   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:33.597515   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:33.597525   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:33.597533   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:33.600349   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:06:34.093184   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:34.093210   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:34.093221   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:34.093226   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:34.097744   33088 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:06:34.099029   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:34.099049   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:34.099060   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:34.099068   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:34.102157   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:34.593608   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:34.593645   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:34.593654   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:34.593670   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:34.596826   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:34.597772   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:34.597788   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:34.597795   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:34.597798   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:34.600574   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:06:35.093541   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:35.093562   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:35.093569   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:35.093573   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:35.097089   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:35.097963   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:35.097984   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:35.097993   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:35.097999   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:35.100831   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:06:35.593347   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:35.593368   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:35.593376   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:35.593381   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:35.596230   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:06:35.597048   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:35.597062   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:35.597071   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:35.597076   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:35.599501   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:06:35.600133   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:06:36.093254   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:36.093275   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:36.093282   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:36.093285   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:36.096914   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:36.097493   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:36.097508   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:36.097515   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:36.097518   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:36.101067   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:36.593117   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:36.593139   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:36.593147   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:36.593153   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:36.596509   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:36.597541   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:36.597558   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:36.597565   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:36.597569   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:36.600208   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:06:37.094010   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:37.094028   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:37.094037   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:37.094041   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:37.097637   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:37.098454   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:37.098471   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:37.098481   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:37.098486   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:37.101612   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:37.593317   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:37.593336   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:37.593342   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:37.593344   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:37.596413   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:37.597179   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:37.597197   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:37.597205   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:37.597216   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:37.601410   33088 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:06:37.602042   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:06:38.093614   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:38.093638   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:38.093647   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:38.093654   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:38.097076   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:38.097844   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:38.097861   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:38.097868   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:38.097872   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:38.101338   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:38.593931   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:38.593951   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:38.593960   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:38.593963   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:38.597030   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:38.597702   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:38.597719   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:38.597725   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:38.597731   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:38.600448   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:06:39.093404   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:39.093425   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:39.093432   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:39.093436   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:39.096626   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:39.097561   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:39.097577   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:39.097584   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:39.097588   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:39.101728   33088 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:06:39.593498   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:39.593523   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:39.593533   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:39.593539   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:39.597131   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:39.598171   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:39.598186   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:39.598194   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:39.598199   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:39.601141   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:06:40.093907   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:40.093931   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:40.093940   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:40.093946   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:40.097171   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:40.097912   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:40.097927   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:40.097934   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:40.097939   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:40.100975   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:40.101587   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:06:40.592964   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:40.592991   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:40.592999   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:40.593003   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:40.596416   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:40.597158   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:40.597174   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:40.597181   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:40.597184   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:40.600168   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:06:41.093073   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:41.093096   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:41.093104   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:41.093108   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:41.097159   33088 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:06:41.098016   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:41.098030   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:41.098038   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:41.098044   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:41.101210   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:41.593025   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:41.593046   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:41.593054   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:41.593058   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:41.596033   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:06:41.596628   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:41.596643   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:41.596649   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:41.596654   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:41.599335   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:06:42.093123   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:42.093144   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:42.093152   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:42.093156   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:42.096299   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:42.097072   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:42.097089   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:42.097097   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:42.097101   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:42.099756   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:06:42.593968   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:42.593993   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:42.594007   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:42.594013   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:42.597592   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:42.598733   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:42.598753   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:42.598764   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:42.598772   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:42.601528   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:06:42.602089   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:06:43.093030   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:43.093050   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:43.093057   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:43.093062   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:43.096531   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:43.097260   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:43.097275   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:43.097281   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:43.097284   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:43.100396   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:43.593122   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:43.593151   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:43.593159   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:43.593163   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:43.596088   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:06:43.596899   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:43.596913   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:43.596921   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:43.596926   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:43.599977   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:44.093564   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:44.093586   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:44.093594   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:44.093598   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:44.099432   33088 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:06:44.100149   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:44.100173   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:44.100182   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:44.100188   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:44.102393   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:06:44.593386   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:44.593407   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:44.593414   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:44.593419   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:44.601493   33088 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0610 11:06:44.602264   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:44.602281   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:44.602292   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:44.602300   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:44.607812   33088 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:06:44.608342   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:06:45.093802   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:45.093822   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:45.093830   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:45.093833   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:45.097316   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:45.097976   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:45.097992   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:45.098001   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:45.098014   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:45.101146   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:45.593839   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:45.593863   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:45.593875   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:45.593881   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:45.597023   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:45.597626   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:45.597642   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:45.597650   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:45.597655   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:45.600911   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:46.093835   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:46.093856   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:46.093863   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:46.093867   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:46.097276   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:46.098005   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:46.098020   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:46.098028   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:46.098033   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:46.101231   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:46.593100   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:46.593123   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:46.593133   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:46.593140   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:46.596308   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:46.597176   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:46.597189   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:46.597195   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:46.597198   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:46.600261   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:47.093150   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:47.093171   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:47.093180   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:47.093184   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:47.097111   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:47.097958   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:47.097974   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:47.097984   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:47.097989   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:47.101236   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:47.101864   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:06:47.593127   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:47.593149   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:47.593157   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:47.593160   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:47.596223   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:47.596907   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:47.596923   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:47.596930   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:47.596933   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:47.599605   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:06:48.093535   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:48.093557   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:48.093570   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:48.093574   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:48.097151   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:48.097906   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:48.097919   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:48.097934   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:48.097936   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:48.101074   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:48.593534   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:48.593555   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:48.593562   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:48.593566   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:48.596674   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:48.597467   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:48.597484   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:48.597490   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:48.597495   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:48.600509   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:49.093646   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:49.093667   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:49.093675   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:49.093680   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:49.096895   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:49.097624   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:49.097640   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:49.097650   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:49.097655   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:49.100679   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:49.593733   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:49.593764   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:49.593776   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:49.593785   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:49.597062   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:49.597862   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:49.597875   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:49.597890   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:49.597894   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:49.601052   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:49.601565   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:06:50.093965   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:50.093988   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:50.093999   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:50.094005   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:50.097503   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:50.098297   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:50.098313   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:50.098320   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:50.098326   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:50.101454   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:50.593138   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:50.593167   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:50.593178   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:50.593185   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:50.597295   33088 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:06:50.598858   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:50.598876   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:50.598888   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:50.598893   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:50.602330   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:51.093383   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:51.093405   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:51.093415   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:51.093421   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:51.096795   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:51.097736   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:51.097756   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:51.097767   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:51.097773   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:51.100875   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:51.593988   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:51.594008   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:51.594015   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:51.594020   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:51.600749   33088 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0610 11:06:51.601566   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:51.601582   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:51.601591   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:51.601598   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:51.605839   33088 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:06:51.606794   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:06:52.093049   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:52.093071   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:52.093080   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:52.093084   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:52.096660   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:52.097795   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:52.097814   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:52.097822   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:52.097825   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:52.100901   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:52.593703   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:52.593728   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:52.593739   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:52.593744   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:52.597218   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:52.597991   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:52.598013   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:52.598020   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:52.598025   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:52.600635   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:06:53.094006   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:53.094028   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:53.094037   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:53.094042   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:53.097623   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:53.098274   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:53.098289   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:53.098296   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:53.098301   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:53.101313   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:53.593786   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:53.593813   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:53.593831   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:53.593838   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:53.597875   33088 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:06:53.598678   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:53.598694   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:53.598701   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:53.598705   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:53.601423   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:06:54.093473   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:54.093505   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:54.093515   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:54.093522   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:54.098520   33088 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:06:54.099586   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:54.099607   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:54.099616   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:54.099619   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:54.103265   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:54.103826   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:06:54.593289   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:54.593309   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:54.593317   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:54.593321   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:54.596848   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:54.597934   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:54.597954   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:54.597966   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:54.597973   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:54.600998   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:55.093851   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:55.093869   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:55.093877   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:55.093883   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:55.098445   33088 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:06:55.099471   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:55.099488   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:55.099498   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:55.099504   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:55.102836   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:55.592981   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:55.593005   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:55.593013   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:55.593017   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:55.597636   33088 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:06:55.598525   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:55.598540   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:55.598548   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:55.598553   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:55.602636   33088 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:06:56.093638   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:56.093666   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:56.093678   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:56.093683   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:56.097757   33088 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:06:56.098411   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:56.098428   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:56.098435   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:56.098439   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:56.101462   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:56.593131   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:56.593153   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:56.593163   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:56.593171   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:56.596409   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:56.596974   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:56.596989   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:56.596997   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:56.597001   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:56.599480   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:06:56.599984   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:06:57.093108   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:57.093128   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:57.093136   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:57.093141   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:57.096747   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:57.097502   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:57.097518   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:57.097525   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:57.097529   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:57.100554   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:57.593174   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:57.593197   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:57.593205   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:57.593213   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:57.596709   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:57.597596   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:57.597616   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:57.597623   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:57.597630   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:57.600668   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:58.093913   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:58.093942   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:58.093949   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:58.093953   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:58.097239   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:58.098002   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:58.098018   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:58.098025   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:58.098030   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:58.100761   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:06:58.593613   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:58.593634   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:58.593642   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:58.593645   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:58.597295   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:58.598126   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:58.598142   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:58.598152   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:58.598156   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:58.600929   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:06:58.601642   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:06:59.093956   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:59.093976   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:59.093984   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:59.093989   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:59.098050   33088 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:06:59.099156   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:59.099178   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:59.099188   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:59.099193   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:59.102263   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:06:59.593089   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:06:59.593109   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:59.593116   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:59.593121   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:59.595913   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:06:59.596587   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:06:59.596605   33088 round_trippers.go:469] Request Headers:
	I0610 11:06:59.596615   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:06:59.596621   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:06:59.599210   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:00.093074   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:00.093095   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:00.093102   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:00.093106   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:00.096244   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:00.096927   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:00.096966   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:00.096978   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:00.096983   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:00.099803   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:00.593328   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:00.593352   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:00.593360   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:00.593365   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:00.596681   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:00.597548   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:00.597567   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:00.597575   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:00.597579   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:00.601591   33088 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:07:00.602470   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:07:01.093732   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:01.093756   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:01.093764   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:01.093768   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:01.096884   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:01.097624   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:01.097641   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:01.097650   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:01.097655   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:01.100263   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:01.593096   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:01.593119   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:01.593129   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:01.593133   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:01.596529   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:01.597266   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:01.597281   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:01.597288   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:01.597292   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:01.600024   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:02.093134   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:02.093155   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:02.093162   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:02.093189   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:02.096409   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:02.097212   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:02.097232   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:02.097243   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:02.097250   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:02.099997   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:02.593945   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:02.593970   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:02.593978   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:02.593985   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:02.597115   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:02.597989   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:02.598012   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:02.598019   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:02.598024   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:02.600743   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:03.093948   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:03.093971   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:03.093982   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:03.093988   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:03.097505   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:03.098400   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:03.098419   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:03.098429   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:03.098436   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:03.101344   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:03.101781   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:07:03.593653   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:03.593673   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:03.593681   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:03.593686   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:03.597355   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:03.598402   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:03.598419   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:03.598427   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:03.598433   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:03.601457   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:04.093130   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:04.093149   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:04.093156   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:04.093160   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:04.097058   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:04.097847   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:04.097866   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:04.097874   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:04.097879   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:04.101388   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:04.593313   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:04.593335   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:04.593343   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:04.593348   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:04.599177   33088 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:07:04.600386   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:04.600403   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:04.600414   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:04.600420   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:04.604627   33088 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:07:05.093628   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:05.093651   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:05.093657   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:05.093661   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:05.096818   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:05.097522   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:05.097539   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:05.097553   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:05.097559   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:05.100038   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:05.592999   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:05.593026   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:05.593038   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:05.593045   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:05.596788   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:05.597762   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:05.597777   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:05.597785   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:05.597790   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:05.601001   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:05.601700   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:07:06.093856   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:06.093884   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:06.093895   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:06.093899   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:06.098385   33088 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:07:06.099313   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:06.099328   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:06.099335   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:06.099339   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:06.102377   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:06.593750   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:06.593774   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:06.593783   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:06.593788   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:06.596863   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:06.597700   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:06.597715   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:06.597729   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:06.597734   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:06.600982   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:07.093700   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:07.093723   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:07.093734   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:07.093740   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:07.097607   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:07.098609   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:07.098629   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:07.098640   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:07.098645   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:07.101436   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:07.593687   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:07.593713   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:07.593723   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:07.593730   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:07.597022   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:07.597707   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:07.597721   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:07.597729   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:07.597733   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:07.600214   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:08.093371   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:08.093391   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:08.093401   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:08.093406   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:08.096811   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:08.097737   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:08.097751   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:08.097758   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:08.097761   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:08.100519   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:08.101102   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:07:08.593112   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:08.593132   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:08.593140   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:08.593144   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:08.596546   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:08.597464   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:08.597484   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:08.597496   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:08.597502   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:08.600209   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:09.093144   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:09.093166   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:09.093172   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:09.093175   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:09.096727   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:09.100127   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:09.100145   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:09.100156   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:09.100163   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:09.103444   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:09.593118   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:09.593139   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:09.593147   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:09.593150   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:09.596188   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:09.596905   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:09.596923   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:09.596932   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:09.596937   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:09.599882   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:10.093706   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:10.093727   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:10.093738   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:10.093744   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:10.097102   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:10.098122   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:10.098142   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:10.098154   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:10.098162   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:10.101014   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:10.101398   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:07:10.593725   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:10.593746   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:10.593755   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:10.593761   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:10.596833   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:10.597611   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:10.597627   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:10.597636   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:10.597647   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:10.600050   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:11.093565   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:11.093588   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:11.093595   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:11.093600   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:11.096673   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:11.097317   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:11.097332   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:11.097338   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:11.097342   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:11.100050   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:11.593910   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:11.593932   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:11.593942   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:11.593948   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:11.596734   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:11.597274   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:11.597292   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:11.597302   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:11.597308   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:11.599917   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:12.093861   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:12.093883   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:12.093890   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:12.093904   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:12.097767   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:12.098535   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:12.098553   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:12.098561   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:12.098566   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:12.101461   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:12.101977   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:07:12.593150   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:12.593175   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:12.593187   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:12.593193   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:12.596382   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:12.597425   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:12.597445   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:12.597454   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:12.597458   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:12.600090   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:13.093567   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:13.093586   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:13.093594   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:13.093597   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:13.096870   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:13.097887   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:13.097943   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:13.097979   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:13.097984   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:13.101384   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:13.593409   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:13.593438   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:13.593448   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:13.593454   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:13.596769   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:13.597535   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:13.597552   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:13.597557   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:13.597562   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:13.600295   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:14.093157   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:14.093191   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:14.093202   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:14.093210   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:14.097020   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:14.097736   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:14.097755   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:14.097766   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:14.097772   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:14.100879   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:14.593366   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:14.593392   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:14.593401   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:14.593408   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:14.596590   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:14.597355   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:14.597370   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:14.597378   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:14.597382   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:14.600269   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:14.600928   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:07:15.093105   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:15.093131   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:15.093141   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:15.093145   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:15.096863   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:15.097615   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:15.097634   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:15.097641   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:15.097645   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:15.100695   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:15.593341   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:15.593369   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:15.593380   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:15.593385   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:15.596923   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:15.597651   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:15.597665   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:15.597671   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:15.597674   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:15.600358   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:16.093398   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:16.093443   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:16.093451   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:16.093455   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:16.096347   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:16.097320   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:16.097338   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:16.097348   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:16.097353   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:16.100024   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:16.593805   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:16.593826   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:16.593834   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:16.593838   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:16.597075   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:16.597999   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:16.598013   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:16.598020   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:16.598024   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:16.601281   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:16.601916   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:07:17.093126   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:17.093145   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:17.093153   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:17.093157   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:17.096663   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:17.097448   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:17.097466   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:17.097473   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:17.097478   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:17.100529   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:17.593431   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:17.593454   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:17.593464   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:17.593468   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:17.596690   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:17.597430   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:17.597448   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:17.597457   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:17.597463   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:17.600164   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:18.093271   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:18.093289   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:18.093298   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:18.093301   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:18.096729   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:18.097675   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:18.097693   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:18.097704   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:18.097711   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:18.100716   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:18.593100   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:18.593120   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:18.593127   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:18.593133   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:18.596077   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:18.596863   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:18.596892   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:18.596899   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:18.596902   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:18.599608   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:19.093419   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:19.093440   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:19.093448   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:19.093453   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:19.097025   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:19.097999   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:19.098013   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:19.098020   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:19.098025   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:19.100648   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:19.101331   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:07:19.593689   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:19.593715   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:19.593726   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:19.593734   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:19.596808   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:19.597734   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:19.597751   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:19.597758   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:19.597763   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:19.600369   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:20.093144   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:20.093167   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:20.093176   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:20.093180   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:20.096716   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:20.097334   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:20.097352   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:20.097362   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:20.097367   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:20.099999   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:20.593917   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:20.593937   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:20.593945   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:20.593949   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:20.598018   33088 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:07:20.598697   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:20.598712   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:20.598722   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:20.598728   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:20.601983   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:21.093678   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:21.093701   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:21.093711   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:21.093718   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:21.097436   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:21.098190   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:21.098205   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:21.098220   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:21.098224   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:21.102230   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:21.102691   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:07:21.593158   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:21.593188   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:21.593200   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:21.593207   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:21.596442   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:21.597210   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:21.597226   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:21.597233   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:21.597237   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:21.600519   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:22.093573   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:22.093597   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:22.093604   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:22.093608   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:22.096806   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:22.097472   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:22.097488   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:22.097494   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:22.097498   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:22.100166   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:22.592966   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:22.592993   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:22.593003   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:22.593008   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:22.596722   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:22.597571   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:22.597589   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:22.597597   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:22.597604   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:22.600386   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:23.093779   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:23.093802   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:23.093814   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:23.093821   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:23.097326   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:23.097985   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:23.098004   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:23.098015   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:23.098020   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:23.101778   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:23.593119   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:23.593140   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:23.593157   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:23.593163   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:23.596721   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:23.597548   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:23.597563   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:23.597570   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:23.597573   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:23.600354   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:23.601103   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:07:24.093138   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:24.093164   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:24.093175   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:24.093179   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:24.098277   33088 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:07:24.099170   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:24.099188   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:24.099198   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:24.099204   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:24.102817   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:24.593880   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:24.593905   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:24.593913   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:24.593919   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:24.596915   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:24.597501   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:24.597514   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:24.597521   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:24.597527   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:24.600335   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:25.093094   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:25.093114   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:25.093122   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:25.093126   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:25.097094   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:25.097835   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:25.097850   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:25.097857   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:25.097861   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:25.101087   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:25.593577   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:25.593601   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:25.593611   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:25.593615   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:25.597098   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:25.597778   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:25.597794   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:25.597801   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:25.597806   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:25.600708   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:25.601223   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:07:26.093473   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:26.093515   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:26.093529   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:26.093538   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:26.097400   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:26.098146   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:26.098161   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:26.098170   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:26.098285   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:26.101839   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:26.593822   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:26.593848   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:26.593859   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:26.593867   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:26.597155   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:26.597765   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:26.597779   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:26.597787   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:26.597790   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:26.600425   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:27.093119   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:27.093140   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:27.093153   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:27.093157   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:27.097099   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:27.097833   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:27.097854   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:27.097865   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:27.097869   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:27.100911   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:27.593782   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:27.593802   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:27.593810   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:27.593813   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:27.596769   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:27.597593   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:27.597607   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:27.597615   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:27.597620   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:27.600037   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:28.093821   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:28.093838   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:28.093844   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:28.093848   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:28.097392   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:28.098085   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:28.098101   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:28.098111   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:28.098116   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:28.101048   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:28.101527   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:07:28.593491   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:28.593513   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:28.593520   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:28.593524   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:28.597035   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:28.597856   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:28.597869   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:28.597877   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:28.597881   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:28.600907   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:29.093900   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:29.093930   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:29.093938   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:29.093942   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:29.097626   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:29.098453   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:29.098475   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:29.098485   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:29.098490   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:29.101367   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:29.593134   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:29.593160   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:29.593172   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:29.593178   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:29.597074   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:29.597668   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:29.597683   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:29.597690   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:29.597693   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:29.600797   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:30.093986   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:30.094005   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:30.094012   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:30.094016   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:30.097777   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:30.098760   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:30.098775   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:30.098782   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:30.098786   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:30.101668   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:30.102228   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:07:30.593648   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:30.593674   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:30.593683   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:30.593690   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:30.596852   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:30.597575   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:30.597591   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:30.597601   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:30.597606   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:30.600485   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:31.093901   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:31.093918   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:31.093926   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:31.093931   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:31.097494   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:31.098225   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:31.098243   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:31.098249   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:31.098259   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:31.101688   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:31.593636   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:31.593661   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:31.593670   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:31.593676   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:31.597267   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:31.597911   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:31.597927   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:31.597934   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:31.597939   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:31.600772   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:32.093720   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:32.093739   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:32.093745   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:32.093751   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:32.097168   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:32.097756   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:32.097771   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:32.097778   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:32.097782   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:32.100266   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:32.593878   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:32.593903   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:32.593915   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:32.593919   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:32.597705   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:32.598469   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:32.598483   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:32.598491   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:32.598496   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:32.602414   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:32.603117   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:07:33.093011   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:33.093032   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:33.093040   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:33.093044   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:33.096224   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:33.097263   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:33.097278   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:33.097286   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:33.097292   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:33.100745   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:33.593688   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:33.593711   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:33.593720   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:33.593725   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:33.597230   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:33.597921   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:33.597935   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:33.597941   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:33.597945   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:33.600586   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:34.093267   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:34.093296   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:34.093305   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:34.093311   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:34.100814   33088 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0610 11:07:34.101726   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:34.101745   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:34.101756   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:34.101764   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:34.105227   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:34.594015   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:34.594042   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:34.594051   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:34.594055   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:34.598023   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:34.598624   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:34.598639   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:34.598646   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:34.598650   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:34.601630   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:35.093139   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:35.093164   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:35.093172   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:35.093175   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:35.096825   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:35.097598   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:35.097615   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:35.097623   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:35.097626   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:35.100572   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:35.101605   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:07:35.593888   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:35.593913   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:35.593924   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:35.593930   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:35.597373   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:35.598025   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:35.598037   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:35.598044   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:35.598047   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:35.601188   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:36.092920   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:36.092942   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:36.092970   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:36.092976   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:36.096447   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:36.097134   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:36.097149   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:36.097156   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:36.097159   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:36.100132   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:36.593121   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:36.593171   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:36.593189   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:36.593195   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:36.596384   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:36.597095   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:36.597110   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:36.597117   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:36.597123   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:36.599779   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:37.093833   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:37.093863   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:37.093874   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:37.093880   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:37.097267   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:37.097942   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:37.097958   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:37.097966   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:37.097972   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:37.101006   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:37.101766   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:07:37.593707   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:37.593743   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:37.593755   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:37.593763   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:37.596729   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:37.597594   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:37.597611   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:37.597622   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:37.597629   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:37.600230   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:38.093790   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:38.093814   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:38.093826   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:38.093831   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:38.097591   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:38.098459   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:38.098478   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:38.098489   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:38.098494   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:38.101256   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:38.594034   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:38.594056   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:38.594064   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:38.594068   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:38.597286   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:38.598046   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:38.598064   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:38.598072   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:38.598076   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:38.600975   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:39.093559   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:39.093593   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:39.093605   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:39.093613   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:39.096855   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:39.097721   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:39.097737   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:39.097743   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:39.097747   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:39.100816   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:39.593724   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:39.593743   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:39.593751   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:39.593755   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:39.596942   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:39.597685   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:39.597700   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:39.597707   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:39.597712   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:39.600474   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:39.600911   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:07:40.093112   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:40.093134   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:40.093140   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:40.093143   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:40.096301   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:40.097189   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:40.097208   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:40.097219   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:40.097226   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:40.100034   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:40.593025   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:40.593047   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:40.593058   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:40.593064   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:40.596297   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:40.597002   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:40.597018   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:40.597027   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:40.597033   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:40.599663   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:41.093159   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:41.093181   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:41.093194   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:41.093200   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:41.096585   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:41.097355   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:41.097370   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:41.097377   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:41.097381   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:41.099924   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:41.592999   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:41.593025   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:41.593035   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:41.593044   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:41.596978   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:41.598046   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:41.598058   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:41.598065   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:41.598069   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:41.601574   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:41.602463   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:07:42.093828   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:42.093851   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:42.093862   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:42.093867   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:42.097117   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:42.097920   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:42.097937   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:42.097944   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:42.097948   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:42.100801   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:42.593386   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:42.593411   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:42.593420   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:42.593424   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:42.597408   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:42.598268   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:42.598285   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:42.598291   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:42.598295   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:42.601431   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:43.093004   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:43.093027   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:43.093034   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:43.093039   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:43.096202   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:43.097344   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:43.097365   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:43.097376   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:43.097381   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:43.100630   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:43.593426   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:43.593447   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:43.593454   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:43.593459   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:43.596463   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:43.597185   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:43.597198   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:43.597205   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:43.597210   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:43.599848   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:44.093813   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:44.093840   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:44.093851   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:44.093859   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:44.097778   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:44.098528   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:44.098547   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:44.098568   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:44.098573   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:44.102904   33088 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:07:44.103595   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:07:44.593110   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:44.593132   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:44.593140   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:44.593143   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:44.596026   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:44.596730   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:44.596748   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:44.596756   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:44.596760   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:44.599292   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:45.093959   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:45.093993   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:45.094004   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:45.094013   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:45.098673   33088 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:07:45.099633   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:45.099651   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:45.099659   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:45.099665   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:45.102527   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:45.593124   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:45.593146   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:45.593153   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:45.593158   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:45.596355   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:45.597188   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:45.597206   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:45.597217   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:45.597222   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:45.600109   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:46.093382   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:46.093402   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:46.093410   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:46.093415   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:46.098429   33088 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:07:46.099550   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:46.099564   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:46.099572   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:46.099578   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:46.102739   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:46.593628   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:46.593648   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:46.593657   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:46.593660   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:46.597987   33088 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0610 11:07:46.598697   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:46.598712   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:46.598719   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:46.598722   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:46.602125   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:46.603147   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:07:47.093141   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:47.093165   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:47.093171   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:47.093176   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:47.096756   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:47.097790   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:47.097810   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:47.097821   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:47.097825   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:47.100394   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:47.593112   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:47.593133   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:47.593141   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:47.593145   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:47.596468   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:47.597097   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:47.597113   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:47.597120   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:47.597125   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:47.600195   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:48.093678   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:48.093698   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:48.093706   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:48.093710   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:48.097085   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:48.097974   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:48.097991   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:48.097998   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:48.098004   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:48.100881   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:48.593601   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:48.593629   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:48.593638   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:48.593650   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:48.597565   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:48.598299   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:48.598314   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:48.598320   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:48.598322   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:48.601319   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:49.093118   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:49.093137   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:49.093145   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:49.093151   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:49.096387   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:49.097040   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:49.097056   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:49.097062   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:49.097067   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:49.099725   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:49.100267   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"
	I0610 11:07:49.593249   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:49.593277   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:49.593290   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:49.593296   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:49.598531   33088 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0610 11:07:49.599181   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:49.599197   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:49.599205   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:49.599208   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:49.602046   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:50.093055   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:50.093076   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:50.093085   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:50.093090   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:50.096660   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:50.097395   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:50.097410   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:50.097418   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:50.097423   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:50.100101   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:50.593066   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:50.593086   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:50.593094   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:50.593097   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:50.596322   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:50.597042   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:50.597060   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:50.597066   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:50.597071   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:50.600024   33088 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0610 11:07:51.093103   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-545cf
	I0610 11:07:51.093125   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:51.093133   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:51.093137   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:51.096536   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:51.097380   33088 round_trippers.go:463] GET https://192.168.39.208:8443/api/v1/nodes/ha-565925
	I0610 11:07:51.097397   33088 round_trippers.go:469] Request Headers:
	I0610 11:07:51.097404   33088 round_trippers.go:473]     Accept: application/json, */*
	I0610 11:07:51.097410   33088 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0610 11:07:51.100566   33088 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0610 11:07:51.101111   33088 pod_ready.go:102] pod "coredns-7db6d8ff4d-545cf" in "kube-system" namespace has status "Ready":"False"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 node add -p ha-565925 --control-plane -v=7 --alsologtostderr" : signal: killed
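The killed command and the readiness loop it was stuck in can be re-checked by hand; a minimal sketch, assuming the profile's kubeconfig context is named "ha-565925" and using an illustrative timeout (neither detail is confirmed by this log):

    # Re-run the node add that was killed, reusing the exact args from the failure above
    out/minikube-linux-amd64 node add -p ha-565925 --control-plane -v=7 --alsologtostderr

    # Inspect the Ready condition that pod_ready.go kept reporting as "False"
    kubectl --context ha-565925 -n kube-system get pod coredns-7db6d8ff4d-545cf \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

    # Or block until the condition flips, with an illustrative timeout
    kubectl --context ha-565925 -n kube-system wait pod/coredns-7db6d8ff4d-545cf \
      --for=condition=Ready --timeout=2m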
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-565925 -n ha-565925
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-565925 logs -n 25: (1.775263643s)
helpers_test.go:252: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925-m04 sudo cat                                          | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m03_ha-565925-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-565925 cp testdata/cp-test.txt                                                | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1107448961/001/cp-test_ha-565925-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925:/home/docker/cp-test_ha-565925-m04_ha-565925.txt                       |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925 sudo cat                                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m04_ha-565925.txt                                 |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m02:/home/docker/cp-test_ha-565925-m04_ha-565925-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925-m02 sudo cat                                          | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m04_ha-565925-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m03:/home/docker/cp-test_ha-565925-m04_ha-565925-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | ha-565925-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-565925 ssh -n ha-565925-m03 sudo cat                                          | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC | 10 Jun 24 10:42 UTC |
	|         | /home/docker/cp-test_ha-565925-m04_ha-565925-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-565925 node stop m02 -v=7                                                     | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:42 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-565925 node start m02 -v=7                                                    | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:44 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-565925 -v=7                                                           | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-565925 -v=7                                                                | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-565925 --wait=true -v=7                                                    | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:47 UTC | 10 Jun 24 10:51 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-565925                                                                | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:51 UTC |                     |
	| node    | ha-565925 node delete m03 -v=7                                                   | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:52 UTC | 10 Jun 24 10:52 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-565925 stop -v=7                                                              | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:52 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-565925 --wait=true                                                         | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 10:54 UTC |                     |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	| node    | add -p ha-565925                                                                 | ha-565925 | jenkins | v1.33.1 | 10 Jun 24 11:05 UTC |                     |
	|         | --control-plane -v=7                                                             |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 10:54:41
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
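	The prefix documented above is klog's standard format. For anyone post-processing this log, the following is a minimal sketch, assuming only the Go standard library, of splitting one such line into severity, timestamp, thread id, source location, and message; it is an illustrative helper, not part of minikube.
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// klogLine matches the prefix documented above:
	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4} \d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+)\s+([^\]]+)\] (.*)$`)
	
	func main() {
		sample := "I0610 10:54:41.118006   30524 out.go:291] Setting OutFile to fd 1 ..."
		m := klogLine.FindStringSubmatch(sample)
		if m == nil {
			fmt.Println("not a klog-formatted line")
			return
		}
		fmt.Printf("severity=%s time=%s thread=%s source=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5])
	}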
	I0610 10:54:41.118006   30524 out.go:291] Setting OutFile to fd 1 ...
	I0610 10:54:41.118313   30524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:54:41.118326   30524 out.go:304] Setting ErrFile to fd 2...
	I0610 10:54:41.118331   30524 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:54:41.118586   30524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 10:54:41.119100   30524 out.go:298] Setting JSON to false
	I0610 10:54:41.120030   30524 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2222,"bootTime":1718014659,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 10:54:41.120088   30524 start.go:139] virtualization: kvm guest
	I0610 10:54:41.122252   30524 out.go:177] * [ha-565925] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 10:54:41.123728   30524 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 10:54:41.123731   30524 notify.go:220] Checking for updates...
	I0610 10:54:41.125175   30524 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:54:41.126614   30524 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 10:54:41.128031   30524 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:54:41.129312   30524 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 10:54:41.130778   30524 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:54:41.132606   30524 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:54:41.133157   30524 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:54:41.133241   30524 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:54:41.148356   30524 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45461
	I0610 10:54:41.148855   30524 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:54:41.149465   30524 main.go:141] libmachine: Using API Version  1
	I0610 10:54:41.149493   30524 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:54:41.149856   30524 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:54:41.150063   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:54:41.150360   30524 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 10:54:41.150685   30524 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:54:41.150725   30524 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:54:41.166173   30524 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46089
	I0610 10:54:41.166610   30524 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:54:41.167143   30524 main.go:141] libmachine: Using API Version  1
	I0610 10:54:41.167177   30524 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:54:41.167585   30524 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:54:41.167745   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:54:41.204423   30524 out.go:177] * Using the kvm2 driver based on existing profile
	I0610 10:54:41.205821   30524 start.go:297] selected driver: kvm2
	I0610 10:54:41.205839   30524 start.go:901] validating driver "kvm2" against &{Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.229 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:54:41.206044   30524 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:54:41.206508   30524 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:54:41.206610   30524 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19046-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0610 10:54:41.221453   30524 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0610 10:54:41.222080   30524 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 10:54:41.222117   30524 cni.go:84] Creating CNI manager for ""
	I0610 10:54:41.222122   30524 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0610 10:54:41.222169   30524 start.go:340] cluster config:
	{Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.229 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:54:41.222302   30524 iso.go:125] acquiring lock: {Name:mk85871dbcb370cef71a4aa2ff7ef43503908c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:54:41.224822   30524 out.go:177] * Starting "ha-565925" primary control-plane node in "ha-565925" cluster
	I0610 10:54:41.226052   30524 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 10:54:41.226097   30524 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0610 10:54:41.226110   30524 cache.go:56] Caching tarball of preloaded images
	I0610 10:54:41.226211   30524 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 10:54:41.226230   30524 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0610 10:54:41.226375   30524 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/config.json ...
	I0610 10:54:41.226626   30524 start.go:360] acquireMachinesLock for ha-565925: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 10:54:41.226690   30524 start.go:364] duration metric: took 37.509µs to acquireMachinesLock for "ha-565925"
	I0610 10:54:41.226705   30524 start.go:96] Skipping create...Using existing machine configuration
	I0610 10:54:41.226712   30524 fix.go:54] fixHost starting: 
	I0610 10:54:41.227120   30524 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:54:41.227161   30524 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:54:41.242583   30524 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42377
	I0610 10:54:41.242978   30524 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:54:41.243450   30524 main.go:141] libmachine: Using API Version  1
	I0610 10:54:41.243475   30524 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:54:41.243758   30524 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:54:41.243949   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:54:41.244094   30524 main.go:141] libmachine: (ha-565925) Calling .GetState
	I0610 10:54:41.245612   30524 fix.go:112] recreateIfNeeded on ha-565925: state=Running err=<nil>
	W0610 10:54:41.245647   30524 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 10:54:41.247531   30524 out.go:177] * Updating the running kvm2 "ha-565925" VM ...
	I0610 10:54:41.248547   30524 machine.go:94] provisionDockerMachine start ...
	I0610 10:54:41.248566   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:54:41.248752   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:54:41.251686   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.252215   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:54:41.252246   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.252393   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:54:41.252533   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.252678   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.252823   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:54:41.253014   30524 main.go:141] libmachine: Using SSH client type: native
	I0610 10:54:41.253203   30524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:54:41.253216   30524 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 10:54:41.373744   30524 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565925
	
	I0610 10:54:41.373773   30524 main.go:141] libmachine: (ha-565925) Calling .GetMachineName
	I0610 10:54:41.374028   30524 buildroot.go:166] provisioning hostname "ha-565925"
	I0610 10:54:41.374051   30524 main.go:141] libmachine: (ha-565925) Calling .GetMachineName
	I0610 10:54:41.374251   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:54:41.376909   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.377435   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:54:41.377469   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.377677   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:54:41.377868   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.378048   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.378178   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:54:41.378464   30524 main.go:141] libmachine: Using SSH client type: native
	I0610 10:54:41.378656   30524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:54:41.378674   30524 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-565925 && echo "ha-565925" | sudo tee /etc/hostname
	I0610 10:54:41.508245   30524 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-565925
	
	I0610 10:54:41.508284   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:54:41.511418   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.511816   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:54:41.511845   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.512073   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:54:41.512267   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.512447   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.512583   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:54:41.512730   30524 main.go:141] libmachine: Using SSH client type: native
	I0610 10:54:41.512872   30524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:54:41.512888   30524 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-565925' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-565925/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-565925' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 10:54:41.622194   30524 main.go:141] libmachine: SSH cmd err, output: <nil>: 
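	The /etc/hosts update just issued is idempotent: it does nothing when a line already ends in the hostname, rewrites an existing 127.0.1.1 entry when one is present, and only otherwise appends a new entry. Below is a rough Go sketch of that decision logic; ensureHostEntry is a hypothetical local helper written for illustration, not the sed/tee-over-SSH commands minikube actually runs.
	package main
	
	import (
		"fmt"
		"regexp"
		"strings"
	)
	
	// ensureHostEntry mirrors the shell above: if no line already ends in the
	// hostname, either rewrite an existing "127.0.1.1 ..." line or append one.
	func ensureHostEntry(hosts, name string) string {
		hasName := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`)
		if hasName.MatchString(hosts) {
			return hosts // nothing to do
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}
	
	func main() {
		current := "127.0.0.1 localhost\n127.0.1.1 oldname\n"
		fmt.Print(ensureHostEntry(current, "ha-565925"))
	}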
	I0610 10:54:41.622225   30524 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19046-3880/.minikube CaCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19046-3880/.minikube}
	I0610 10:54:41.622254   30524 buildroot.go:174] setting up certificates
	I0610 10:54:41.622265   30524 provision.go:84] configureAuth start
	I0610 10:54:41.622280   30524 main.go:141] libmachine: (ha-565925) Calling .GetMachineName
	I0610 10:54:41.622553   30524 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:54:41.625606   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.626048   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:54:41.626080   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.626269   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:54:41.628920   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.629378   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:54:41.629408   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.629544   30524 provision.go:143] copyHostCerts
	I0610 10:54:41.629575   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 10:54:41.629647   30524 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem, removing ...
	I0610 10:54:41.629661   30524 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 10:54:41.629736   30524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem (1082 bytes)
	I0610 10:54:41.629825   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 10:54:41.629850   30524 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem, removing ...
	I0610 10:54:41.629856   30524 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 10:54:41.629892   30524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem (1123 bytes)
	I0610 10:54:41.629951   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 10:54:41.629974   30524 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem, removing ...
	I0610 10:54:41.629983   30524 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 10:54:41.630016   30524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem (1675 bytes)
	I0610 10:54:41.630079   30524 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem org=jenkins.ha-565925 san=[127.0.0.1 192.168.39.208 ha-565925 localhost minikube]
	I0610 10:54:41.796354   30524 provision.go:177] copyRemoteCerts
	I0610 10:54:41.796408   30524 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 10:54:41.796427   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:54:41.799182   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.799580   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:54:41.799612   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.799774   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:54:41.799964   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.800110   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:54:41.800269   30524 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:54:41.886743   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 10:54:41.886807   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 10:54:41.911979   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 10:54:41.912068   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0610 10:54:41.934784   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 10:54:41.934862   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 10:54:41.965258   30524 provision.go:87] duration metric: took 342.978909ms to configureAuth
	I0610 10:54:41.965284   30524 buildroot.go:189] setting minikube options for container-runtime
	I0610 10:54:41.965557   30524 config.go:182] Loaded profile config "ha-565925": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:54:41.965650   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:54:41.968652   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.969098   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:54:41.969127   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:54:41.969313   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:54:41.969506   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.969658   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:54:41.969782   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:54:41.969945   30524 main.go:141] libmachine: Using SSH client type: native
	I0610 10:54:41.970089   30524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:54:41.970105   30524 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 10:56:16.545583   30524 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 10:56:16.545610   30524 machine.go:97] duration metric: took 1m35.297049726s to provisionDockerMachine
	I0610 10:56:16.545622   30524 start.go:293] postStartSetup for "ha-565925" (driver="kvm2")
	I0610 10:56:16.545634   30524 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 10:56:16.545648   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:56:16.545946   30524 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 10:56:16.545974   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:56:16.549506   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.549888   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:56:16.549917   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.550060   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:56:16.550291   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:56:16.550434   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:56:16.550585   30524 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:56:16.643222   30524 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 10:56:16.647268   30524 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 10:56:16.647298   30524 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/addons for local assets ...
	I0610 10:56:16.647386   30524 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/files for local assets ...
	I0610 10:56:16.647463   30524 filesync.go:149] local asset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> 107582.pem in /etc/ssl/certs
	I0610 10:56:16.647472   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /etc/ssl/certs/107582.pem
	I0610 10:56:16.647547   30524 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 10:56:16.656115   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /etc/ssl/certs/107582.pem (1708 bytes)
	I0610 10:56:16.678422   30524 start.go:296] duration metric: took 132.785526ms for postStartSetup
	I0610 10:56:16.678466   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:56:16.678740   30524 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0610 10:56:16.678764   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:56:16.681456   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.681793   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:56:16.681818   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.682024   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:56:16.682194   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:56:16.682351   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:56:16.682480   30524 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	W0610 10:56:16.766654   30524 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0610 10:56:16.766682   30524 fix.go:56] duration metric: took 1m35.539971634s for fixHost
	I0610 10:56:16.766702   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:56:16.769598   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.769916   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:56:16.769941   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.770107   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:56:16.770306   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:56:16.770485   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:56:16.770642   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:56:16.770836   30524 main.go:141] libmachine: Using SSH client type: native
	I0610 10:56:16.771025   30524 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.208 22 <nil> <nil>}
	I0610 10:56:16.771036   30524 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 10:56:16.881672   30524 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718016976.851445963
	
	I0610 10:56:16.881699   30524 fix.go:216] guest clock: 1718016976.851445963
	I0610 10:56:16.881706   30524 fix.go:229] Guest: 2024-06-10 10:56:16.851445963 +0000 UTC Remote: 2024-06-10 10:56:16.766689612 +0000 UTC m=+95.683159524 (delta=84.756351ms)
	I0610 10:56:16.881728   30524 fix.go:200] guest clock delta is within tolerance: 84.756351ms
	I0610 10:56:16.881733   30524 start.go:83] releasing machines lock for "ha-565925", held for 1m35.655035273s
	I0610 10:56:16.881753   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:56:16.882001   30524 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:56:16.884407   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.884788   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:56:16.884813   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.885036   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:56:16.885622   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:56:16.885800   30524 main.go:141] libmachine: (ha-565925) Calling .DriverName
	I0610 10:56:16.885881   30524 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 10:56:16.885923   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:56:16.885974   30524 ssh_runner.go:195] Run: cat /version.json
	I0610 10:56:16.885997   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHHostname
	I0610 10:56:16.888482   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.888507   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.888849   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:56:16.888877   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.888905   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:56:16.888921   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:56:16.889003   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:56:16.889176   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:56:16.889183   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHPort
	I0610 10:56:16.889379   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHKeyPath
	I0610 10:56:16.889382   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:56:16.889551   30524 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:56:16.889565   30524 main.go:141] libmachine: (ha-565925) Calling .GetSSHUsername
	I0610 10:56:16.889718   30524 sshutil.go:53] new ssh client: &{IP:192.168.39.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/ha-565925/id_rsa Username:docker}
	I0610 10:56:17.011118   30524 ssh_runner.go:195] Run: systemctl --version
	I0610 10:56:17.017131   30524 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 10:56:17.216081   30524 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 10:56:17.223769   30524 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 10:56:17.223850   30524 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 10:56:17.233465   30524 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0610 10:56:17.233483   30524 start.go:494] detecting cgroup driver to use...
	I0610 10:56:17.233543   30524 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 10:56:17.249240   30524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 10:56:17.272860   30524 docker.go:217] disabling cri-docker service (if available) ...
	I0610 10:56:17.272920   30524 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 10:56:17.286910   30524 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 10:56:17.300438   30524 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 10:56:17.458186   30524 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 10:56:17.614805   30524 docker.go:233] disabling docker service ...
	I0610 10:56:17.614876   30524 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 10:56:17.632334   30524 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 10:56:17.647026   30524 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 10:56:17.806618   30524 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 10:56:17.960595   30524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 10:56:17.976431   30524 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 10:56:17.994520   30524 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0610 10:56:17.994572   30524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:56:18.005055   30524 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 10:56:18.005111   30524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:56:18.015347   30524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:56:18.025972   30524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:56:18.035997   30524 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 10:56:18.046374   30524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:56:18.056748   30524 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:56:18.067550   30524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 10:56:18.079015   30524 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 10:56:18.089287   30524 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 10:56:18.098589   30524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:56:18.248797   30524 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0610 10:57:52.551485   30524 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m34.302647129s)
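	The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf before the restart that just completed (unusually slowly, at roughly 1m34s): they pin pause_image to registry.k8s.io/pause:3.9, force cgroup_manager to cgroupfs, and reinstate conmon_cgroup plus the unprivileged-port sysctl. The following is a hedged Go sketch of the first two rewrites, applying the same regex substitutions to the file contents as a string; it is illustrative only and not minikube's implementation.
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// rewriteCrioConf applies the same kind of line rewrites as the first two
	// sed commands above: pin the pause image and force the cgroupfs manager.
	func rewriteCrioConf(conf string) string {
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		return cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	}
	
	func main() {
		in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.8\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
		fmt.Print(rewriteCrioConf(in))
	}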
	I0610 10:57:52.551522   30524 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 10:57:52.551583   30524 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 10:57:52.557137   30524 start.go:562] Will wait 60s for crictl version
	I0610 10:57:52.557197   30524 ssh_runner.go:195] Run: which crictl
	I0610 10:57:52.560833   30524 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 10:57:52.602747   30524 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0610 10:57:52.602812   30524 ssh_runner.go:195] Run: crio --version
	I0610 10:57:52.632305   30524 ssh_runner.go:195] Run: crio --version
	I0610 10:57:52.663707   30524 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0610 10:57:52.664992   30524 main.go:141] libmachine: (ha-565925) Calling .GetIP
	I0610 10:57:52.667804   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:57:52.668260   30524 main.go:141] libmachine: (ha-565925) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:d6:ef", ip: ""} in network mk-ha-565925: {Iface:virbr1 ExpiryTime:2024-06-10 11:38:04 +0000 UTC Type:0 Mac:52:54:00:d3:d6:ef Iaid: IPaddr:192.168.39.208 Prefix:24 Hostname:ha-565925 Clientid:01:52:54:00:d3:d6:ef}
	I0610 10:57:52.668300   30524 main.go:141] libmachine: (ha-565925) DBG | domain ha-565925 has defined IP address 192.168.39.208 and MAC address 52:54:00:d3:d6:ef in network mk-ha-565925
	I0610 10:57:52.668509   30524 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0610 10:57:52.673571   30524 kubeadm.go:877] updating cluster {Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.229 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 10:57:52.673697   30524 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 10:57:52.673733   30524 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 10:57:52.722568   30524 crio.go:514] all images are preloaded for cri-o runtime.
	I0610 10:57:52.722591   30524 crio.go:433] Images already preloaded, skipping extraction
	I0610 10:57:52.722634   30524 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 10:57:52.758588   30524 crio.go:514] all images are preloaded for cri-o runtime.
	I0610 10:57:52.758613   30524 cache_images.go:84] Images are preloaded, skipping loading
	I0610 10:57:52.758623   30524 kubeadm.go:928] updating node { 192.168.39.208 8443 v1.30.1 crio true true} ...
	I0610 10:57:52.758735   30524 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-565925 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 10:57:52.758813   30524 ssh_runner.go:195] Run: crio config
	I0610 10:57:52.807160   30524 cni.go:84] Creating CNI manager for ""
	I0610 10:57:52.807180   30524 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0610 10:57:52.807188   30524 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 10:57:52.807207   30524 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.208 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-565925 NodeName:ha-565925 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 10:57:52.807474   30524 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-565925"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 10:57:52.807497   30524 kube-vip.go:115] generating kube-vip config ...
	I0610 10:57:52.807538   30524 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0610 10:57:52.821166   30524 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0610 10:57:52.821266   30524 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0610 10:57:52.821314   30524 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 10:57:52.830928   30524 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 10:57:52.831003   30524 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0610 10:57:52.840191   30524 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0610 10:57:52.856314   30524 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 10:57:52.873456   30524 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0610 10:57:52.889534   30524 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0610 10:57:52.905592   30524 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0610 10:57:52.909983   30524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 10:57:53.084746   30524 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 10:57:53.099672   30524 certs.go:68] Setting up /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925 for IP: 192.168.39.208
	I0610 10:57:53.099692   30524 certs.go:194] generating shared ca certs ...
	I0610 10:57:53.099705   30524 certs.go:226] acquiring lock for ca certs: {Name:mke8b68fecbd8b649419d142cdde25446085a9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:57:53.099868   30524 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key
	I0610 10:57:53.099914   30524 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key
	I0610 10:57:53.099929   30524 certs.go:256] generating profile certs ...
	I0610 10:57:53.100014   30524 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/client.key
	I0610 10:57:53.100051   30524 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.17088615
	I0610 10:57:53.100070   30524 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.17088615 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.208 192.168.39.230 192.168.39.254]
	I0610 10:57:53.273760   30524 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.17088615 ...
	I0610 10:57:53.273791   30524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.17088615: {Name:mk79115d7de4bf61379a9c75b6c64a9b4dc80bca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:57:53.274014   30524 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.17088615 ...
	I0610 10:57:53.274033   30524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.17088615: {Name:mk4d8a4986706bc557549784e21d622fc4d3ed07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:57:53.274155   30524 certs.go:381] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt.17088615 -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt
	I0610 10:57:53.274312   30524 certs.go:385] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key.17088615 -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key
	I0610 10:57:53.274447   30524 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key
	I0610 10:57:53.274463   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 10:57:53.274477   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 10:57:53.274492   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 10:57:53.274507   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 10:57:53.274521   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 10:57:53.274536   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 10:57:53.274550   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 10:57:53.274564   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 10:57:53.274613   30524 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem (1338 bytes)
	W0610 10:57:53.274643   30524 certs.go:480] ignoring /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758_empty.pem, impossibly tiny 0 bytes
	I0610 10:57:53.274656   30524 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 10:57:53.274681   30524 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem (1082 bytes)
	I0610 10:57:53.274704   30524 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem (1123 bytes)
	I0610 10:57:53.274728   30524 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem (1675 bytes)
	I0610 10:57:53.274768   30524 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem (1708 bytes)
	I0610 10:57:53.274798   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:57:53.274814   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem -> /usr/share/ca-certificates/10758.pem
	I0610 10:57:53.274829   30524 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /usr/share/ca-certificates/107582.pem
	I0610 10:57:53.275331   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 10:57:53.300829   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 10:57:53.324567   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 10:57:53.350089   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 10:57:53.374999   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0610 10:57:53.397824   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 10:57:53.421021   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 10:57:53.446630   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/ha-565925/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 10:57:53.470414   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 10:57:53.493000   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem --> /usr/share/ca-certificates/10758.pem (1338 bytes)
	I0610 10:57:53.515339   30524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /usr/share/ca-certificates/107582.pem (1708 bytes)
	I0610 10:57:53.537877   30524 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 10:57:53.553878   30524 ssh_runner.go:195] Run: openssl version
	I0610 10:57:53.559722   30524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 10:57:53.569566   30524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:57:53.574152   30524 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:57:53.574204   30524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 10:57:53.579638   30524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 10:57:53.588481   30524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10758.pem && ln -fs /usr/share/ca-certificates/10758.pem /etc/ssl/certs/10758.pem"
	I0610 10:57:53.598838   30524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10758.pem
	I0610 10:57:53.603320   30524 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:34 /usr/share/ca-certificates/10758.pem
	I0610 10:57:53.603377   30524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10758.pem
	I0610 10:57:53.608835   30524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10758.pem /etc/ssl/certs/51391683.0"
	I0610 10:57:53.617653   30524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107582.pem && ln -fs /usr/share/ca-certificates/107582.pem /etc/ssl/certs/107582.pem"
	I0610 10:57:53.628558   30524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107582.pem
	I0610 10:57:53.633075   30524 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:34 /usr/share/ca-certificates/107582.pem
	I0610 10:57:53.633128   30524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107582.pem
	I0610 10:57:53.638735   30524 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107582.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 10:57:53.648052   30524 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 10:57:53.652519   30524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0610 10:57:53.658463   30524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0610 10:57:53.664313   30524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0610 10:57:53.670045   30524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0610 10:57:53.676237   30524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0610 10:57:53.681823   30524 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0610 10:57:53.687578   30524 kubeadm.go:391] StartCluster: {Name:ha-565925 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-565925 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.208 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.230 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.229 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:57:53.687693   30524 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0610 10:57:53.687749   30524 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 10:57:53.733352   30524 cri.go:89] found id: "30454a419886c40b480f6310ea93590cfd5ce458d59101eb2f1d8b18ccc00fe3"
	I0610 10:57:53.733379   30524 cri.go:89] found id: "3f42a3959512141305a423acbd9e3651a0d52b5082c682b258cd4164bf4c8e22"
	I0610 10:57:53.733385   30524 cri.go:89] found id: "895531b30d08486c2c45c81d3c4061852a40480faff500bc98d063e08c3908f2"
	I0610 10:57:53.733390   30524 cri.go:89] found id: "ba05d1801bbb55716b014287ef6d2a8e0065c2e60eb0da2be941e285cce4111d"
	I0610 10:57:53.733395   30524 cri.go:89] found id: "18be5875f033dc26e05de432e9aafd5da62427c82b8a7148b7a2315e67a331fa"
	I0610 10:57:53.733400   30524 cri.go:89] found id: "031c3214a18181965175ad1ce4be9461912a8f144a9fd8499e18a516fbc4c24b"
	I0610 10:57:53.733403   30524 cri.go:89] found id: "6d2fc31bedad8ed60279933edc3fdf9c744a1606b0249fb4358d66b5c7884b47"
	I0610 10:57:53.733407   30524 cri.go:89] found id: "0a358cc1cc573aa1750cc09e41a48373a9ec054c4093e9b04258e36921b56cf5"
	I0610 10:57:53.733409   30524 cri.go:89] found id: "d6b392205cc4da349937ddbd66cd5e4e32466011eb011c9e13a0214e5aeab566"
	I0610 10:57:53.733415   30524 cri.go:89] found id: "ca1b692a8aa8fde2427299395cc8e93b726bd59d0d7029f6a172775a2ed06780"
	I0610 10:57:53.733418   30524 cri.go:89] found id: "d73c4fbf165479cdecba73b84a6325cb243bbb4cd1fed39f1c9a2c00168252e1"
	I0610 10:57:53.733420   30524 cri.go:89] found id: "a51d5bffe5db4200ac6336c7ca76cc182a95d91ff55c5a12955c341dd76f23c5"
	I0610 10:57:53.733422   30524 cri.go:89] found id: "10ce07d12f096d630f9093eb4eeb3bcfb435174cad5058aad05bd4c955206bef"
	I0610 10:57:53.733425   30524 cri.go:89] found id: "a35ae66a1bbe396e6ff9d769def35e984902ed42b5989274e34cad8f90ba2627"
	I0610 10:57:53.733430   30524 cri.go:89] found id: "1f037e4537f6182747a78c8398e388d1cd43fe536754d6d8a50f52b8689b3163"
	I0610 10:57:53.733432   30524 cri.go:89] found id: "534a412f3a743952b0fba0175071fb9a47fd04169c4014721c7e5c6931d7e62f"
	I0610 10:57:53.733435   30524 cri.go:89] found id: "fa492285e9f663d2b76c575594b5ba550e97e7861c891c05022d7e2ac1a78f91"
	I0610 10:57:53.733439   30524 cri.go:89] found id: "538119110afb1122b2b9d43d0a15441ed76f351b1221ca19caa981d3aab0eb82"
	I0610 10:57:53.733442   30524 cri.go:89] found id: "15b93b06d8221ab57065f3fdffaa93240b54fcaea6359ee510147933ebc24cdd"
	I0610 10:57:53.733445   30524 cri.go:89] found id: ""
	I0610 10:57:53.733492   30524 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jun 10 11:07:52 ha-565925 crio[6561]: time="2024-06-10 11:07:51.999616021Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b35cc288-6207-47b9-8c08-fa1fb18b4d9a name=/runtime.v1.RuntimeService/Version
	Jun 10 11:07:52 ha-565925 crio[6561]: time="2024-06-10 11:07:52.007455378Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db90527d-4975-42da-98a6-24c208453edd name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:07:52 ha-565925 crio[6561]: time="2024-06-10 11:07:52.008352191Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718017672008326730,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db90527d-4975-42da-98a6-24c208453edd name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:07:52 ha-565925 crio[6561]: time="2024-06-10 11:07:52.009396951Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=887025f9-49db-4490-8186-781dbd86d767 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:07:52 ha-565925 crio[6561]: time="2024-06-10 11:07:52.009503075Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=887025f9-49db-4490-8186-781dbd86d767 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:07:52 ha-565925 crio[6561]: time="2024-06-10 11:07:52.010382584Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc71731db34e54cc482f777258e552da4eb09b06301d22a96d4b5b7a1c09553a,PodSandboxId:2301576baf44ec2b48a39ee83fb5a9bcb8a8f9655e5d368ac4b1373f193c70f1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:6,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718017278826933875,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a7adba8b85d829b73a4b55001ec3a5549587e6b92cba7280bc5042eb1d764a2,PodSandboxId:555188fecd0274a950ee2c75d96e55ba0e8e22f259a08df1f022bdcbea700980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718017260825295406,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307518a974a9d81484017b6def4bcb35734f73f49643e1e3b41a2e1bb4d72619,PodSandboxId:8777e890e5cc662fe143a51eeebf243bac07d02db168f69e8fbe6341b9e5d111,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718017258826262478,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4667bd353fdda8be94426be8fb00d6739c3209268ea60a077feb6d24afc39af7,PodSandboxId:9384a3551e3f6663c95c30015955798fba04704226e06db5bf249fb54feaf99e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718017242826163325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9596f78d1f9f1a08bb0774a454ecd00ac562ae38017ea807582d9fe153c3ae83,PodSandboxId:8777e890e5cc662fe143a51eeebf243bac07d02db168f69e8fbe6341b9e5d111,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718017149836607469,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5196758907fd1be55dfb4db8fdf71169c2226b54a2688835b92147fbaf8b52,PodSandboxId:555188fecd0274a950ee2c75d96e55ba0e8e22f259a08df1f022bdcbea700980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718017149822916335,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14ab46f3546bcfed28150552839b3cc283c32cb309a33ebb0ea67459079f5eb,PodSandboxId:20e1ade57d2542a1c7331c6dcfc2127d5be744e132190337c981b0fc4bed8da4,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718017112116718602,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.
kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9700dc3bf19471a12df22302b585640a8bba48b9c13b6f07e34797964a72bf9,PodSandboxId:9384a3551e3f6663c95c30015955798fba04704226e06db5bf249fb54feaf99e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718017078747702884,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.po
d.terminationGracePeriod: 30,},},&Container{Id:dac5139d75fe4e3d41205aa1803b8091a016d26e34b621f388426b4f28c9788f,PodSandboxId:16504243eb24ec6452badeef3694a359b10b881b6cbee11932acfb706fa05569,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718017079128195775,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b7f7bf516814f2c5dbe0fbc6daa3a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:a6bfc115b83fe8e36c67f3ce6d994b1cce135626a1c3a20165012107bebf06ca,PodSandboxId:868f5b2fa2a9647cf0d9f242ebbb87f7167e73566a4cfd589ec6112e3a3d61c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718017079118362076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b445c2d316f603033fc8e810
ba508bba9398ff7de68e41b686958ee2cb8fcfd,PodSandboxId:b49a011721881d8ce465640daa30b2d69b6cae387aca077c70daa38e2c3cc389,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718017078925256217,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbcde3714e14329d6337427e054bc34da36c1a1a94a6aad9cc9ae1b179eebdd,PodSandboxId:2301576baf44ec2b48a39ee83fb5a9bcb8a8f9655e5d368ac4b1373f193c70f1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718017078902111617,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83c407eac5c82e6b20991f6cfe3e6f662eb2f7cbcc8a79638d675d463c8120dd,PodSandboxId:cea0105c4b4e7225b5371932b06a504c5cbf20c43d948908687c1708dd82410d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718017078803503346,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminatio
nMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d162210bec339f31d4b24d962ad510c8c5712d5173ea2a82ebe50e463194bf12,PodSandboxId:dd0f08cb4bc7915dd3c4046a654abb28b7711f688615e361aaf3b5a874d439d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718017078580667689,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e293a1cc869311fd15c723f109226cd7cf9e58f9c0ce73b81e66e643ba0824,PodSandboxId:276099ec692d58a43f2137fdb8c495cf2b238659587a093f63455929cc0159f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718016607125233498,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:031c3214a18181965175ad1ce4be9461912a8f144a9fd8499e18a516fbc4c24b,PodSandboxId:cfe7af207d454e48b4c9a313d5fffb0f03c0fb7b7fb6a479a1b43dc5e8d3fa0f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1718016585794533885,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b7f7bf516814f2c5dbe0fbc6daa3a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6b392205cc4da
349937ddbd66cd5e4e32466011eb011c9e13a0214e5aeab566,PodSandboxId:92b6f53b325e00531ba020a4091debef83c310509523dcadd98455c576589d1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718016573870537430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a358cc1cc573aa1750cc09e41a48373a9ec054c4093e9b0
4258e36921b56cf5,PodSandboxId:3afe7674416b272a7b1f2f0765e713a115b8a9fc430d4da60440baaec31d798c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718016573906904776,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a51d5bffe5db4200ac6336c7ca76cc182a95d91ff55c5a12955c341dd76f23c5,PodSandboxId:38fe7da9f5e494f306636e4ee0f552c2e44d43db2ef1a04a5ea901f66d5db1e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718016573751979920,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73c4fbf165479cdecba73b84a6325cb243bbb4cd1fed39f1c9a2c00168252e1,PodSandboxId:d3e905f6d61a711b33785d0332754575ce24a61714424b5bce0bd881d36495df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718016573784490891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca1b692a8aa8fde2427299395cc8e93b726bd59d0d7029f6a172775a2ed06780,PodSandboxId:d74bbdd47986be76d0cd64bcc477460ea153199ba5f7b49f49a95d6c410dc7c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718016573866917347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"conta
inerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=887025f9-49db-4490-8186-781dbd86d767 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:07:52 ha-565925 crio[6561]: time="2024-06-10 11:07:52.048611874Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d0b499e0-9b29-462d-9797-64451b9cacd2 name=/runtime.v1.RuntimeService/Version
	Jun 10 11:07:52 ha-565925 crio[6561]: time="2024-06-10 11:07:52.048733242Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d0b499e0-9b29-462d-9797-64451b9cacd2 name=/runtime.v1.RuntimeService/Version
	Jun 10 11:07:52 ha-565925 crio[6561]: time="2024-06-10 11:07:52.049651939Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4320a646-f211-4341-a29b-b1a9902e162c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:07:52 ha-565925 crio[6561]: time="2024-06-10 11:07:52.050295468Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718017672050274164,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4320a646-f211-4341-a29b-b1a9902e162c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:07:52 ha-565925 crio[6561]: time="2024-06-10 11:07:52.050878166Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6f4325c0-957b-4b2f-aa5f-049ceb532489 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:07:52 ha-565925 crio[6561]: time="2024-06-10 11:07:52.051006267Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6f4325c0-957b-4b2f-aa5f-049ceb532489 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:07:52 ha-565925 crio[6561]: time="2024-06-10 11:07:52.051448569Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc71731db34e54cc482f777258e552da4eb09b06301d22a96d4b5b7a1c09553a,PodSandboxId:2301576baf44ec2b48a39ee83fb5a9bcb8a8f9655e5d368ac4b1373f193c70f1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:6,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718017278826933875,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a7adba8b85d829b73a4b55001ec3a5549587e6b92cba7280bc5042eb1d764a2,PodSandboxId:555188fecd0274a950ee2c75d96e55ba0e8e22f259a08df1f022bdcbea700980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718017260825295406,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307518a974a9d81484017b6def4bcb35734f73f49643e1e3b41a2e1bb4d72619,PodSandboxId:8777e890e5cc662fe143a51eeebf243bac07d02db168f69e8fbe6341b9e5d111,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718017258826262478,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4667bd353fdda8be94426be8fb00d6739c3209268ea60a077feb6d24afc39af7,PodSandboxId:9384a3551e3f6663c95c30015955798fba04704226e06db5bf249fb54feaf99e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718017242826163325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9596f78d1f9f1a08bb0774a454ecd00ac562ae38017ea807582d9fe153c3ae83,PodSandboxId:8777e890e5cc662fe143a51eeebf243bac07d02db168f69e8fbe6341b9e5d111,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718017149836607469,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5196758907fd1be55dfb4db8fdf71169c2226b54a2688835b92147fbaf8b52,PodSandboxId:555188fecd0274a950ee2c75d96e55ba0e8e22f259a08df1f022bdcbea700980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718017149822916335,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14ab46f3546bcfed28150552839b3cc283c32cb309a33ebb0ea67459079f5eb,PodSandboxId:20e1ade57d2542a1c7331c6dcfc2127d5be744e132190337c981b0fc4bed8da4,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718017112116718602,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.
kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9700dc3bf19471a12df22302b585640a8bba48b9c13b6f07e34797964a72bf9,PodSandboxId:9384a3551e3f6663c95c30015955798fba04704226e06db5bf249fb54feaf99e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718017078747702884,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.po
d.terminationGracePeriod: 30,},},&Container{Id:dac5139d75fe4e3d41205aa1803b8091a016d26e34b621f388426b4f28c9788f,PodSandboxId:16504243eb24ec6452badeef3694a359b10b881b6cbee11932acfb706fa05569,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718017079128195775,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b7f7bf516814f2c5dbe0fbc6daa3a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:a6bfc115b83fe8e36c67f3ce6d994b1cce135626a1c3a20165012107bebf06ca,PodSandboxId:868f5b2fa2a9647cf0d9f242ebbb87f7167e73566a4cfd589ec6112e3a3d61c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718017079118362076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b445c2d316f603033fc8e810
ba508bba9398ff7de68e41b686958ee2cb8fcfd,PodSandboxId:b49a011721881d8ce465640daa30b2d69b6cae387aca077c70daa38e2c3cc389,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718017078925256217,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbcde3714e14329d6337427e054bc34da36c1a1a94a6aad9cc9ae1b179eebdd,PodSandboxId:2301576baf44ec2b48a39ee83fb5a9bcb8a8f9655e5d368ac4b1373f193c70f1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718017078902111617,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83c407eac5c82e6b20991f6cfe3e6f662eb2f7cbcc8a79638d675d463c8120dd,PodSandboxId:cea0105c4b4e7225b5371932b06a504c5cbf20c43d948908687c1708dd82410d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718017078803503346,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminatio
nMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d162210bec339f31d4b24d962ad510c8c5712d5173ea2a82ebe50e463194bf12,PodSandboxId:dd0f08cb4bc7915dd3c4046a654abb28b7711f688615e361aaf3b5a874d439d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718017078580667689,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e293a1cc869311fd15c723f109226cd7cf9e58f9c0ce73b81e66e643ba0824,PodSandboxId:276099ec692d58a43f2137fdb8c495cf2b238659587a093f63455929cc0159f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718016607125233498,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:031c3214a18181965175ad1ce4be9461912a8f144a9fd8499e18a516fbc4c24b,PodSandboxId:cfe7af207d454e48b4c9a313d5fffb0f03c0fb7b7fb6a479a1b43dc5e8d3fa0f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1718016585794533885,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b7f7bf516814f2c5dbe0fbc6daa3a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6b392205cc4da
349937ddbd66cd5e4e32466011eb011c9e13a0214e5aeab566,PodSandboxId:92b6f53b325e00531ba020a4091debef83c310509523dcadd98455c576589d1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718016573870537430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a358cc1cc573aa1750cc09e41a48373a9ec054c4093e9b0
4258e36921b56cf5,PodSandboxId:3afe7674416b272a7b1f2f0765e713a115b8a9fc430d4da60440baaec31d798c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718016573906904776,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a51d5bffe5db4200ac6336c7ca76cc182a95d91ff55c5a12955c341dd76f23c5,PodSandboxId:38fe7da9f5e494f306636e4ee0f552c2e44d43db2ef1a04a5ea901f66d5db1e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718016573751979920,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73c4fbf165479cdecba73b84a6325cb243bbb4cd1fed39f1c9a2c00168252e1,PodSandboxId:d3e905f6d61a711b33785d0332754575ce24a61714424b5bce0bd881d36495df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718016573784490891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca1b692a8aa8fde2427299395cc8e93b726bd59d0d7029f6a172775a2ed06780,PodSandboxId:d74bbdd47986be76d0cd64bcc477460ea153199ba5f7b49f49a95d6c410dc7c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718016573866917347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"conta
inerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6f4325c0-957b-4b2f-aa5f-049ceb532489 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:07:52 ha-565925 crio[6561]: time="2024-06-10 11:07:52.092345547Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=4c4e015b-8558-4647-a793-cb7235bb3458 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 10 11:07:52 ha-565925 crio[6561]: time="2024-06-10 11:07:52.092894642Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:20e1ade57d2542a1c7331c6dcfc2127d5be744e132190337c981b0fc4bed8da4,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-6wmkd,Uid:f8a1e0dc-e561-4def-9787-c5d0eda08fda,Namespace:default,Attempt:2,},State:SANDBOX_READY,CreatedAt:1718017111990454778,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T10:41:21.254050246Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b49a011721881d8ce465640daa30b2d69b6cae387aca077c70daa38e2c3cc389,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-wn6nh,Uid:9e47f047-e98b-48c8-8a33-8f790a3e8017,Namespace:kube-system,Attempt:2,},State:
SANDBOX_READY,CreatedAt:1718017078345599958,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T10:38:49.589282044Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8777e890e5cc662fe143a51eeebf243bac07d02db168f69e8fbe6341b9e5d111,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-565925,Uid:d811c4cb2aa091785cd31dce6f7bed4f,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1718017078312106337,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,tier: control-plane,},Annotations:map[string]string{kubernetes
.io/config.hash: d811c4cb2aa091785cd31dce6f7bed4f,kubernetes.io/config.seen: 2024-06-10T10:38:30.793996164Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2301576baf44ec2b48a39ee83fb5a9bcb8a8f9655e5d368ac4b1373f193c70f1,Metadata:&PodSandboxMetadata{Name:kindnet-rnn59,Uid:9141e131-eebc-4f51-8b55-46ff649ffaee,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1718017078303599007,Labels:map[string]string{app: kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T10:38:44.065979711Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:16504243eb24ec6452badeef3694a359b10b881b6cbee11932acfb706fa05569,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-565925,Uid:5b7f7bf516814f2c5dbe0fbc6d
aa3a18,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718017078301981922,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b7f7bf516814f2c5dbe0fbc6daa3a18,},Annotations:map[string]string{kubernetes.io/config.hash: 5b7f7bf516814f2c5dbe0fbc6daa3a18,kubernetes.io/config.seen: 2024-06-10T10:49:32.651608997Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cea0105c4b4e7225b5371932b06a504c5cbf20c43d948908687c1708dd82410d,Metadata:&PodSandboxMetadata{Name:etcd-ha-565925,Uid:24c16c67f513f809f76a7bbd749e01f3,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1718017078284997571,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernete
s.io/etcd.advertise-client-urls: https://192.168.39.208:2379,kubernetes.io/config.hash: 24c16c67f513f809f76a7bbd749e01f3,kubernetes.io/config.seen: 2024-06-10T10:38:30.793999653Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:868f5b2fa2a9647cf0d9f242ebbb87f7167e73566a4cfd589ec6112e3a3d61c2,Metadata:&PodSandboxMetadata{Name:kube-proxy-wdjhn,Uid:da3ac11b-0906-4695-80b1-f3f4f1a34de1,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1718017078264353981,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T10:38:44.034881743Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9384a3551e3f6663c95c30015955798fba04704226e06db5bf249fb54feaf99e,Metadata:&PodSandboxMetadat
a{Name:storage-provisioner,Uid:0ca60a36-c445-4520-b857-7df39dfed848,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1718017078263524291,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"ho
stNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-10T10:38:49.603428236Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dd0f08cb4bc7915dd3c4046a654abb28b7711f688615e361aaf3b5a874d439d0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-565925,Uid:0160bc841c85a002ebb521cea7065bc7,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1718017078252849769,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0160bc841c85a002ebb521cea7065bc7,kubernetes.io/config.seen: 2024-06-10T10:38:30.793997530Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:555188fecd0274a950ee2c75d9
6e55ba0e8e22f259a08df1f022bdcbea700980,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-565925,Uid:12d1dab5f9db3366c19df7ea45438b14,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1718017078220631206,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.208:8443,kubernetes.io/config.hash: 12d1dab5f9db3366c19df7ea45438b14,kubernetes.io/config.seen: 2024-06-10T10:38:30.793992583Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:276099ec692d58a43f2137fdb8c495cf2b238659587a093f63455929cc0159f8,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-6wmkd,Uid:f8a1e0dc-e561-4def-9787-c5d0eda08fda,Namespace:default,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1718016606989447824,Lab
els:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T10:41:21.254050246Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cfe7af207d454e48b4c9a313d5fffb0f03c0fb7b7fb6a479a1b43dc5e8d3fa0f,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-565925,Uid:5b7f7bf516814f2c5dbe0fbc6daa3a18,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718016585696564054,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b7f7bf516814f2c5dbe0fbc6daa3a18,},Annotations:map[string]string{kubernetes.io/config.hash: 5b7f7bf516814f2c5dbe0fbc6daa3a18,kubernetes.io/config.seen: 2024-06-10T10:49:32.651608997Z,kubernetes.io/config.source: fil
e,},RuntimeHandler:,},&PodSandbox{Id:d74bbdd47986be76d0cd64bcc477460ea153199ba5f7b49f49a95d6c410dc7c4,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-545cf,Uid:7564efde-b96c-48b3-b194-bca695f7ae95,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1718016573337503685,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T10:38:49.597228433Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3afe7674416b272a7b1f2f0765e713a115b8a9fc430d4da60440baaec31d798c,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-wn6nh,Uid:9e47f047-e98b-48c8-8a33-8f790a3e8017,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1718016573321801242,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubern
etes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T10:38:49.589282044Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d3e905f6d61a711b33785d0332754575ce24a61714424b5bce0bd881d36495df,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-565925,Uid:0160bc841c85a002ebb521cea7065bc7,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1718016573309292584,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0160bc841c85a002ebb521cea7065bc7,kubernetes.io/config.seen: 2024-06-10T10:38:30.793997530Z,kubernetes.io/config.source:
file,},RuntimeHandler:,},&PodSandbox{Id:38fe7da9f5e494f306636e4ee0f552c2e44d43db2ef1a04a5ea901f66d5db1e8,Metadata:&PodSandboxMetadata{Name:etcd-ha-565925,Uid:24c16c67f513f809f76a7bbd749e01f3,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1718016573303434210,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.208:2379,kubernetes.io/config.hash: 24c16c67f513f809f76a7bbd749e01f3,kubernetes.io/config.seen: 2024-06-10T10:38:30.793999653Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:92b6f53b325e00531ba020a4091debef83c310509523dcadd98455c576589d1a,Metadata:&PodSandboxMetadata{Name:kube-proxy-wdjhn,Uid:da3ac11b-0906-4695-80b1-f3f4f1a34de1,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,Cre
atedAt:1718016573242462220,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T10:38:44.034881743Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=4c4e015b-8558-4647-a793-cb7235bb3458 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 10 11:07:52 ha-565925 crio[6561]: time="2024-06-10 11:07:52.093776191Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16855248-09e7-4023-84c5-775559472d3a name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:07:52 ha-565925 crio[6561]: time="2024-06-10 11:07:52.093850496Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16855248-09e7-4023-84c5-775559472d3a name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:07:52 ha-565925 crio[6561]: time="2024-06-10 11:07:52.094522935Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc71731db34e54cc482f777258e552da4eb09b06301d22a96d4b5b7a1c09553a,PodSandboxId:2301576baf44ec2b48a39ee83fb5a9bcb8a8f9655e5d368ac4b1373f193c70f1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:6,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718017278826933875,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a7adba8b85d829b73a4b55001ec3a5549587e6b92cba7280bc5042eb1d764a2,PodSandboxId:555188fecd0274a950ee2c75d96e55ba0e8e22f259a08df1f022bdcbea700980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718017260825295406,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307518a974a9d81484017b6def4bcb35734f73f49643e1e3b41a2e1bb4d72619,PodSandboxId:8777e890e5cc662fe143a51eeebf243bac07d02db168f69e8fbe6341b9e5d111,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718017258826262478,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4667bd353fdda8be94426be8fb00d6739c3209268ea60a077feb6d24afc39af7,PodSandboxId:9384a3551e3f6663c95c30015955798fba04704226e06db5bf249fb54feaf99e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718017242826163325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9596f78d1f9f1a08bb0774a454ecd00ac562ae38017ea807582d9fe153c3ae83,PodSandboxId:8777e890e5cc662fe143a51eeebf243bac07d02db168f69e8fbe6341b9e5d111,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718017149836607469,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5196758907fd1be55dfb4db8fdf71169c2226b54a2688835b92147fbaf8b52,PodSandboxId:555188fecd0274a950ee2c75d96e55ba0e8e22f259a08df1f022bdcbea700980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718017149822916335,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14ab46f3546bcfed28150552839b3cc283c32cb309a33ebb0ea67459079f5eb,PodSandboxId:20e1ade57d2542a1c7331c6dcfc2127d5be744e132190337c981b0fc4bed8da4,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718017112116718602,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.
kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9700dc3bf19471a12df22302b585640a8bba48b9c13b6f07e34797964a72bf9,PodSandboxId:9384a3551e3f6663c95c30015955798fba04704226e06db5bf249fb54feaf99e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718017078747702884,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.po
d.terminationGracePeriod: 30,},},&Container{Id:dac5139d75fe4e3d41205aa1803b8091a016d26e34b621f388426b4f28c9788f,PodSandboxId:16504243eb24ec6452badeef3694a359b10b881b6cbee11932acfb706fa05569,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718017079128195775,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b7f7bf516814f2c5dbe0fbc6daa3a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:a6bfc115b83fe8e36c67f3ce6d994b1cce135626a1c3a20165012107bebf06ca,PodSandboxId:868f5b2fa2a9647cf0d9f242ebbb87f7167e73566a4cfd589ec6112e3a3d61c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718017079118362076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b445c2d316f603033fc8e810
ba508bba9398ff7de68e41b686958ee2cb8fcfd,PodSandboxId:b49a011721881d8ce465640daa30b2d69b6cae387aca077c70daa38e2c3cc389,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718017078925256217,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbcde3714e14329d6337427e054bc34da36c1a1a94a6aad9cc9ae1b179eebdd,PodSandboxId:2301576baf44ec2b48a39ee83fb5a9bcb8a8f9655e5d368ac4b1373f193c70f1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718017078902111617,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83c407eac5c82e6b20991f6cfe3e6f662eb2f7cbcc8a79638d675d463c8120dd,PodSandboxId:cea0105c4b4e7225b5371932b06a504c5cbf20c43d948908687c1708dd82410d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718017078803503346,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminatio
nMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d162210bec339f31d4b24d962ad510c8c5712d5173ea2a82ebe50e463194bf12,PodSandboxId:dd0f08cb4bc7915dd3c4046a654abb28b7711f688615e361aaf3b5a874d439d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718017078580667689,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e293a1cc869311fd15c723f109226cd7cf9e58f9c0ce73b81e66e643ba0824,PodSandboxId:276099ec692d58a43f2137fdb8c495cf2b238659587a093f63455929cc0159f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718016607125233498,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:031c3214a18181965175ad1ce4be9461912a8f144a9fd8499e18a516fbc4c24b,PodSandboxId:cfe7af207d454e48b4c9a313d5fffb0f03c0fb7b7fb6a479a1b43dc5e8d3fa0f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1718016585794533885,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b7f7bf516814f2c5dbe0fbc6daa3a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6b392205cc4da
349937ddbd66cd5e4e32466011eb011c9e13a0214e5aeab566,PodSandboxId:92b6f53b325e00531ba020a4091debef83c310509523dcadd98455c576589d1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718016573870537430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a358cc1cc573aa1750cc09e41a48373a9ec054c4093e9b0
4258e36921b56cf5,PodSandboxId:3afe7674416b272a7b1f2f0765e713a115b8a9fc430d4da60440baaec31d798c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718016573906904776,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a51d5bffe5db4200ac6336c7ca76cc182a95d91ff55c5a12955c341dd76f23c5,PodSandboxId:38fe7da9f5e494f306636e4ee0f552c2e44d43db2ef1a04a5ea901f66d5db1e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718016573751979920,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73c4fbf165479cdecba73b84a6325cb243bbb4cd1fed39f1c9a2c00168252e1,PodSandboxId:d3e905f6d61a711b33785d0332754575ce24a61714424b5bce0bd881d36495df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718016573784490891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca1b692a8aa8fde2427299395cc8e93b726bd59d0d7029f6a172775a2ed06780,PodSandboxId:d74bbdd47986be76d0cd64bcc477460ea153199ba5f7b49f49a95d6c410dc7c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718016573866917347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"conta
inerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16855248-09e7-4023-84c5-775559472d3a name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:07:52 ha-565925 crio[6561]: time="2024-06-10 11:07:52.098857211Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd32911c-cda2-45b7-a98f-bab3174d9416 name=/runtime.v1.RuntimeService/Version
	Jun 10 11:07:52 ha-565925 crio[6561]: time="2024-06-10 11:07:52.098915556Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd32911c-cda2-45b7-a98f-bab3174d9416 name=/runtime.v1.RuntimeService/Version
	Jun 10 11:07:52 ha-565925 crio[6561]: time="2024-06-10 11:07:52.099694339Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c384845e-22d3-4e03-a1d4-2378e2b47153 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:07:52 ha-565925 crio[6561]: time="2024-06-10 11:07:52.100204957Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718017672100183487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c384845e-22d3-4e03-a1d4-2378e2b47153 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:07:52 ha-565925 crio[6561]: time="2024-06-10 11:07:52.101302598Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31571192-c84c-44b1-b51d-1d0f14fd9fe8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:07:52 ha-565925 crio[6561]: time="2024-06-10 11:07:52.101368064Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31571192-c84c-44b1-b51d-1d0f14fd9fe8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:07:52 ha-565925 crio[6561]: time="2024-06-10 11:07:52.101834208Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc71731db34e54cc482f777258e552da4eb09b06301d22a96d4b5b7a1c09553a,PodSandboxId:2301576baf44ec2b48a39ee83fb5a9bcb8a8f9655e5d368ac4b1373f193c70f1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:6,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718017278826933875,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a7adba8b85d829b73a4b55001ec3a5549587e6b92cba7280bc5042eb1d764a2,PodSandboxId:555188fecd0274a950ee2c75d96e55ba0e8e22f259a08df1f022bdcbea700980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718017260825295406,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:307518a974a9d81484017b6def4bcb35734f73f49643e1e3b41a2e1bb4d72619,PodSandboxId:8777e890e5cc662fe143a51eeebf243bac07d02db168f69e8fbe6341b9e5d111,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718017258826262478,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4667bd353fdda8be94426be8fb00d6739c3209268ea60a077feb6d24afc39af7,PodSandboxId:9384a3551e3f6663c95c30015955798fba04704226e06db5bf249fb54feaf99e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718017242826163325,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9596f78d1f9f1a08bb0774a454ecd00ac562ae38017ea807582d9fe153c3ae83,PodSandboxId:8777e890e5cc662fe143a51eeebf243bac07d02db168f69e8fbe6341b9e5d111,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718017149836607469,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d811c4cb2aa091785cd31dce6f7bed4f,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df5196758907fd1be55dfb4db8fdf71169c2226b54a2688835b92147fbaf8b52,PodSandboxId:555188fecd0274a950ee2c75d96e55ba0e8e22f259a08df1f022bdcbea700980,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718017149822916335,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d1dab5f9db3366c19df7ea45438b14,},Annotations:map[string]string{io.kubernetes.container.hash: 9f38f78c,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14ab46f3546bcfed28150552839b3cc283c32cb309a33ebb0ea67459079f5eb,PodSandboxId:20e1ade57d2542a1c7331c6dcfc2127d5be744e132190337c981b0fc4bed8da4,Metadata:&ContainerMetadata{Name:busybox,Attempt:2,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718017112116718602,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.
kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9700dc3bf19471a12df22302b585640a8bba48b9c13b6f07e34797964a72bf9,PodSandboxId:9384a3551e3f6663c95c30015955798fba04704226e06db5bf249fb54feaf99e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718017078747702884,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca60a36-c445-4520-b857-7df39dfed848,},Annotations:map[string]string{io.kubernetes.container.hash: dd284142,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.po
d.terminationGracePeriod: 30,},},&Container{Id:dac5139d75fe4e3d41205aa1803b8091a016d26e34b621f388426b4f28c9788f,PodSandboxId:16504243eb24ec6452badeef3694a359b10b881b6cbee11932acfb706fa05569,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1718017079128195775,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b7f7bf516814f2c5dbe0fbc6daa3a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:a6bfc115b83fe8e36c67f3ce6d994b1cce135626a1c3a20165012107bebf06ca,PodSandboxId:868f5b2fa2a9647cf0d9f242ebbb87f7167e73566a4cfd589ec6112e3a3d61c2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718017079118362076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b445c2d316f603033fc8e810
ba508bba9398ff7de68e41b686958ee2cb8fcfd,PodSandboxId:b49a011721881d8ce465640daa30b2d69b6cae387aca077c70daa38e2c3cc389,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718017078925256217,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3cbcde3714e14329d6337427e054bc34da36c1a1a94a6aad9cc9ae1b179eebdd,PodSandboxId:2301576baf44ec2b48a39ee83fb5a9bcb8a8f9655e5d368ac4b1373f193c70f1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:5,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718017078902111617,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-rnn59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9141e131-eebc-4f51-8b55-46ff649ffaee,},Annotations:map[string]string{io.kubernetes.container.hash: afecbc30,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83c407eac5c82e6b20991f6cfe3e6f662eb2f7cbcc8a79638d675d463c8120dd,PodSandboxId:cea0105c4b4e7225b5371932b06a504c5cbf20c43d948908687c1708dd82410d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718017078803503346,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminatio
nMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d162210bec339f31d4b24d962ad510c8c5712d5173ea2a82ebe50e463194bf12,PodSandboxId:dd0f08cb4bc7915dd3c4046a654abb28b7711f688615e361aaf3b5a874d439d0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718017078580667689,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51e293a1cc869311fd15c723f109226cd7cf9e58f9c0ce73b81e66e643ba0824,PodSandboxId:276099ec692d58a43f2137fdb8c495cf2b238659587a093f63455929cc0159f8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718016607125233498,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-6wmkd,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f8a1e0dc-e561-4def-9787-c5d0eda08fda,},Annotations:map[string]string{io.kubernetes.container.hash: 8230443c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:031c3214a18181965175ad1ce4be9461912a8f144a9fd8499e18a516fbc4c24b,PodSandboxId:cfe7af207d454e48b4c9a313d5fffb0f03c0fb7b7fb6a479a1b43dc5e8d3fa0f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1718016585794533885,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b7f7bf516814f2c5dbe0fbc6daa3a18,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6b392205cc4da
349937ddbd66cd5e4e32466011eb011c9e13a0214e5aeab566,PodSandboxId:92b6f53b325e00531ba020a4091debef83c310509523dcadd98455c576589d1a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718016573870537430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wdjhn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da3ac11b-0906-4695-80b1-f3f4f1a34de1,},Annotations:map[string]string{io.kubernetes.container.hash: ae58608d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a358cc1cc573aa1750cc09e41a48373a9ec054c4093e9b0
4258e36921b56cf5,PodSandboxId:3afe7674416b272a7b1f2f0765e713a115b8a9fc430d4da60440baaec31d798c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718016573906904776,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wn6nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e47f047-e98b-48c8-8a33-8f790a3e8017,},Annotations:map[string]string{io.kubernetes.container.hash: 83e8f640,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a51d5bffe5db4200ac6336c7ca76cc182a95d91ff55c5a12955c341dd76f23c5,PodSandboxId:38fe7da9f5e494f306636e4ee0f552c2e44d43db2ef1a04a5ea901f66d5db1e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718016573751979920,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24c16c67f513f809f76a7bbd749e01f3,},Annotations:map[string]string{io.kubernetes.container.hash: 1f88a6f1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73c4fbf165479cdecba73b84a6325cb243bbb4cd1fed39f1c9a2c00168252e1,PodSandboxId:d3e905f6d61a711b33785d0332754575ce24a61714424b5bce0bd881d36495df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718016573784490891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-565925,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0160bc841c85a002ebb521cea7065bc7,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca1b692a8aa8fde2427299395cc8e93b726bd59d0d7029f6a172775a2ed06780,PodSandboxId:d74bbdd47986be76d0cd64bcc477460ea153199ba5f7b49f49a95d6c410dc7c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718016573866917347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-545cf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7564efde-b96c-48b3-b194-bca695f7ae95,},Annotations:map[string]string{io.kubernetes.container.hash: 1f269937,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"conta
inerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=31571192-c84c-44b1-b51d-1d0f14fd9fe8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dc71731db34e5       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f   6 minutes ago       Running             kindnet-cni               6                   2301576baf44e       kindnet-rnn59
	0a7adba8b85d8       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   6 minutes ago       Running             kube-apiserver            6                   555188fecd027       kube-apiserver-ha-565925
	307518a974a9d       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   6 minutes ago       Running             kube-controller-manager   5                   8777e890e5cc6       kube-controller-manager-ha-565925
	4667bd353fdda       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   7 minutes ago       Running             storage-provisioner       6                   9384a3551e3f6       storage-provisioner
	9596f78d1f9f1       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   8 minutes ago       Exited              kube-controller-manager   4                   8777e890e5cc6       kube-controller-manager-ha-565925
	df5196758907f       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   8 minutes ago       Exited              kube-apiserver            5                   555188fecd027       kube-apiserver-ha-565925
	e14ab46f3546b       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   9 minutes ago       Running             busybox                   2                   20e1ade57d254       busybox-fc5497c4f-6wmkd
	dac5139d75fe4       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   9 minutes ago       Running             kube-vip                  1                   16504243eb24e       kube-vip-ha-565925
	a6bfc115b83fe       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   9 minutes ago       Running             kube-proxy                2                   868f5b2fa2a96       kube-proxy-wdjhn
	5b445c2d316f6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   2                   b49a011721881       coredns-7db6d8ff4d-wn6nh
	3cbcde3714e14       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f   9 minutes ago       Exited              kindnet-cni               5                   2301576baf44e       kindnet-rnn59
	83c407eac5c82       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   cea0105c4b4e7       etcd-ha-565925
	f9700dc3bf194       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Exited              storage-provisioner       5                   9384a3551e3f6       storage-provisioner
	d162210bec339       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   9 minutes ago       Running             kube-scheduler            2                   dd0f08cb4bc79       kube-scheduler-ha-565925
	51e293a1cc869       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   17 minutes ago      Exited              busybox                   1                   276099ec692d5       busybox-fc5497c4f-6wmkd
	031c3214a1818       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12   18 minutes ago      Exited              kube-vip                  0                   cfe7af207d454       kube-vip-ha-565925
	0a358cc1cc573       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 minutes ago      Exited              coredns                   1                   3afe7674416b2       coredns-7db6d8ff4d-wn6nh
	d6b392205cc4d       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   18 minutes ago      Exited              kube-proxy                1                   92b6f53b325e0       kube-proxy-wdjhn
	ca1b692a8aa8f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 minutes ago      Exited              coredns                   1                   d74bbdd47986b       coredns-7db6d8ff4d-545cf
	d73c4fbf16547       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   18 minutes ago      Exited              kube-scheduler            1                   d3e905f6d61a7       kube-scheduler-ha-565925
	a51d5bffe5db4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   18 minutes ago      Exited              etcd                      1                   38fe7da9f5e49       etcd-ha-565925
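The ATTEMPT column is the per-container restart count reported by the runtime: kube-apiserver is on attempt 6 and kube-controller-manager on attempt 5, with their previous attempts (df5196758907f, 9596f78d1f9f1) left in the Exited state. One way to pull the tail of an exited attempt's log for comparison, assuming its record has not yet been garbage-collected (container ID taken from the table above):

	# Last 50 lines from the exited kube-apiserver attempt 5
	out/minikube-linux-amd64 -p ha-565925 ssh "sudo crictl logs --tail 50 df5196758907f"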
	
	
	==> coredns [0a358cc1cc573aa1750cc09e41a48373a9ec054c4093e9b04258e36921b56cf5] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5b445c2d316f603033fc8e810ba508bba9398ff7de68e41b686958ee2cb8fcfd] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.9:57768->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.9:57768->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ca1b692a8aa8fde2427299395cc8e93b726bd59d0d7029f6a172775a2ed06780] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
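All three coredns instances fail the same way: list/watch calls to the in-cluster kubernetes Service VIP at https://10.96.0.1:443 return "connection refused" or "no route to host", which lines up with the windows in which the kube-apiserver attempts listed earlier were exited or restarting. A quick check of the Service and its backing endpoints once the apiserver is reachable again, assuming the kubeconfig context carries the profile name as elsewhere in this report:

	# The ClusterIP that coredns is dialing
	kubectl --context ha-565925 get svc kubernetes -o wide
	# The apiserver endpoints that should back 10.96.0.1:443
	kubectl --context ha-565925 get endpoints kubernetes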
	
	
	==> describe nodes <==
	Name:               ha-565925
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565925
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-565925
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T10_38_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 10:38:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565925
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 11:07:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 11:04:52 +0000   Mon, 10 Jun 2024 10:38:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 11:04:52 +0000   Mon, 10 Jun 2024 10:38:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 11:04:52 +0000   Mon, 10 Jun 2024 10:38:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 11:04:52 +0000   Mon, 10 Jun 2024 10:38:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.208
	  Hostname:    ha-565925
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 81e39b112b50436db5c7fc16ce8eb53e
	  System UUID:                81e39b11-2b50-436d-b5c7-fc16ce8eb53e
	  Boot ID:                    afd4fe8d-84f7-41ff-9890-dc78b1ff1343
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6wmkd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 coredns-7db6d8ff4d-545cf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 coredns-7db6d8ff4d-wn6nh             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-ha-565925                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kindnet-rnn59                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      29m
	  kube-system                 kube-apiserver-ha-565925             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-ha-565925    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-wdjhn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-ha-565925             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-vip-ha-565925                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 9m6s               kube-proxy       
	  Normal   Starting                 29m                kube-proxy       
	  Normal   Starting                 17m                kube-proxy       
	  Normal   Starting                 29m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     29m                kubelet          Node ha-565925 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  29m                kubelet          Node ha-565925 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    29m                kubelet          Node ha-565925 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           29m                node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Normal   NodeReady                29m                kubelet          Node ha-565925 status is now: NodeReady
	  Normal   RegisteredNode           27m                node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Normal   RegisteredNode           26m                node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Normal   RegisteredNode           17m                node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Normal   RegisteredNode           17m                node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Normal   RegisteredNode           16m                node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Warning  ContainerGCFailed        10m (x5 over 19m)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           8m1s               node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Normal   RegisteredNode           6m37s              node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
	  Normal   RegisteredNode           75s                node-controller  Node ha-565925 event: Registered Node ha-565925 in Controller
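The ContainerGCFailed warning on this node shows the kubelet failing to dial /var/run/crio/crio.sock, i.e. the container runtime itself was unavailable for part of the window, which is consistent with the exited attempts listed earlier. A minimal sketch for checking the runtime from the host, assuming the primary node is reachable via the profile as in the commands elsewhere in this report:

	# CRI-O service state and socket on the control-plane node
	out/minikube-linux-amd64 -p ha-565925 ssh "sudo systemctl status crio --no-pager && ls -l /var/run/crio/crio.sock"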
	
	
	Name:               ha-565925-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565925-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-565925
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T10_39_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 10:39:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565925-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 11:07:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 11:04:51 +0000   Mon, 10 Jun 2024 10:53:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 11:04:51 +0000   Mon, 10 Jun 2024 10:53:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 11:04:51 +0000   Mon, 10 Jun 2024 10:53:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 11:04:51 +0000   Mon, 10 Jun 2024 10:53:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    ha-565925-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 55a76fcaaea54bebb8694a2ff5e7d2ea
	  System UUID:                55a76fca-aea5-4beb-b869-4a2ff5e7d2ea
	  Boot ID:                    f2031124-7282-4f77-956b-81d80d2807d2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8g67g                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 etcd-ha-565925-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kindnet-9jv7q                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      28m
	  kube-system                 kube-apiserver-ha-565925-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-ha-565925-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-vbgnx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-ha-565925-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-vip-ha-565925-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 8m3s               kube-proxy       
	  Normal   Starting                 17m                kube-proxy       
	  Normal   Starting                 28m                kube-proxy       
	  Normal   NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node ha-565925-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node ha-565925-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node ha-565925-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           28m                node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal   RegisteredNode           27m                node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal   RegisteredNode           26m                node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal   NodeNotReady             24m                node-controller  Node ha-565925-m02 status is now: NodeNotReady
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node ha-565925-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node ha-565925-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node ha-565925-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           17m                node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal   RegisteredNode           17m                node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal   RegisteredNode           16m                node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal   NodeNotReady             15m                node-controller  Node ha-565925-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        9m54s              kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           8m1s               node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal   RegisteredNode           6m37s              node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	  Normal   RegisteredNode           75s                node-controller  Node ha-565925-m02 event: Registered Node ha-565925-m02 in Controller
	
	
	Name:               ha-565925-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565925-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-565925
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T10_41_59_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 10:41:58 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565925-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 10:52:12 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 10 Jun 2024 10:51:52 +0000   Mon, 10 Jun 2024 10:52:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 10 Jun 2024 10:51:52 +0000   Mon, 10 Jun 2024 10:52:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 10 Jun 2024 10:51:52 +0000   Mon, 10 Jun 2024 10:52:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 10 Jun 2024 10:51:52 +0000   Mon, 10 Jun 2024 10:52:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.229
	  Hostname:    ha-565925-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5196e1f9b5684ae78368fe8d66c3d24c
	  System UUID:                5196e1f9-b568-4ae7-8368-fe8d66c3d24c
	  Boot ID:                    fa33354e-1710-42c3-b31e-616fe87f501e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pnv2t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kindnet-lkf5b              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	  kube-system                 kube-proxy-dpsbw           0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 15m                kube-proxy       
	  Normal   Starting                 25m                kube-proxy       
	  Normal   NodeHasSufficientPID     25m (x2 over 25m)  kubelet          Node ha-565925-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  25m (x2 over 25m)  kubelet          Node ha-565925-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    25m (x2 over 25m)  kubelet          Node ha-565925-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           25m                node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   RegisteredNode           25m                node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   RegisteredNode           25m                node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   NodeReady                25m                kubelet          Node ha-565925-m04 status is now: NodeReady
	  Normal   RegisteredNode           17m                node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   RegisteredNode           17m                node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   RegisteredNode           16m                node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   Starting                 16m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m (x3 over 16m)  kubelet          Node ha-565925-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m (x3 over 16m)  kubelet          Node ha-565925-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m (x3 over 16m)  kubelet          Node ha-565925-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 16m (x2 over 16m)  kubelet          Node ha-565925-m04 has been rebooted, boot id: fa33354e-1710-42c3-b31e-616fe87f501e
	  Normal   NodeReady                16m (x2 over 16m)  kubelet          Node ha-565925-m04 status is now: NodeReady
	  Normal   NodeNotReady             14m (x2 over 16m)  node-controller  Node ha-565925-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           8m1s               node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   RegisteredNode           6m37s              node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	  Normal   RegisteredNode           75s                node-controller  Node ha-565925-m04 event: Registered Node ha-565925-m04 in Controller
	
	
	Name:               ha-565925-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-565925-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=ha-565925
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T11_06_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 11:06:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-565925-m05
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 11:07:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 11:06:50 +0000   Mon, 10 Jun 2024 11:06:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 11:06:50 +0000   Mon, 10 Jun 2024 11:06:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 11:06:50 +0000   Mon, 10 Jun 2024 11:06:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 11:06:50 +0000   Mon, 10 Jun 2024 11:06:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    ha-565925-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2691f9f5452744ac9295ff819a2e0871
	  System UUID:                2691f9f5-4527-44ac-9295-ff819a2e0871
	  Boot ID:                    cc0cbe3a-405a-474a-9adf-c3fbaaa65f9e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-gr9tm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 etcd-ha-565925-m05                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         92s
	  kube-system                 kindnet-tgtj9                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      93s
	  kube-system                 kube-apiserver-ha-565925-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-controller-manager-ha-565925-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-72kn2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-scheduler-ha-565925-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-vip-ha-565925-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 90s                kube-proxy       
	  Normal  NodeHasSufficientMemory  93s (x9 over 93s)  kubelet          Node ha-565925-m05 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    93s (x7 over 93s)  kubelet          Node ha-565925-m05 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     93s (x7 over 93s)  kubelet          Node ha-565925-m05 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  93s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           92s                node-controller  Node ha-565925-m05 event: Registered Node ha-565925-m05 in Controller
	  Normal  RegisteredNode           91s                node-controller  Node ha-565925-m05 event: Registered Node ha-565925-m05 in Controller
	  Normal  RegisteredNode           75s                node-controller  Node ha-565925-m05 event: Registered Node ha-565925-m05 in Controller
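
For reference, the node view above can be regenerated against the same cluster and the NotReady worker inspected directly; a minimal sketch, assuming the kubeconfig context matches the profile name ha-565925 taken from the node names:

	# node summary and the full description of the NotReady worker
	kubectl --context ha-565925 get nodes -o wide
	kubectl --context ha-565925 describe node ha-565925-m04
	# check the kubelet unit on that node via the minikube binary used in this report
	out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m04 "sudo systemctl status kubelet"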
	
	
	==> dmesg <==
	[  +7.135890] systemd-fstab-generator[1360]: Ignoring "noauto" option for root device
	[  +0.082129] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.392312] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.014769] kauditd_printk_skb: 43 callbacks suppressed
	[  +9.917879] kauditd_printk_skb: 21 callbacks suppressed
	[Jun10 10:49] systemd-fstab-generator[3825]: Ignoring "noauto" option for root device
	[  +0.169090] systemd-fstab-generator[3837]: Ignoring "noauto" option for root device
	[  +0.188008] systemd-fstab-generator[3851]: Ignoring "noauto" option for root device
	[  +0.156438] systemd-fstab-generator[3863]: Ignoring "noauto" option for root device
	[  +0.268788] systemd-fstab-generator[3891]: Ignoring "noauto" option for root device
	[  +0.739516] systemd-fstab-generator[3989]: Ignoring "noauto" option for root device
	[ +12.921754] kauditd_printk_skb: 218 callbacks suppressed
	[ +10.073147] kauditd_printk_skb: 1 callbacks suppressed
	[Jun10 10:50] kauditd_printk_skb: 1 callbacks suppressed
	[ +10.065204] kauditd_printk_skb: 6 callbacks suppressed
	[Jun10 10:56] systemd-fstab-generator[6466]: Ignoring "noauto" option for root device
	[  +0.159614] systemd-fstab-generator[6478]: Ignoring "noauto" option for root device
	[  +0.189354] systemd-fstab-generator[6492]: Ignoring "noauto" option for root device
	[  +0.153693] systemd-fstab-generator[6504]: Ignoring "noauto" option for root device
	[  +0.292364] systemd-fstab-generator[6532]: Ignoring "noauto" option for root device
	[Jun10 10:57] systemd-fstab-generator[6677]: Ignoring "noauto" option for root device
	[  +0.096364] kauditd_printk_skb: 106 callbacks suppressed
	[  +5.143063] kauditd_printk_skb: 12 callbacks suppressed
	[Jun10 10:58] kauditd_printk_skb: 90 callbacks suppressed
	[ +27.067539] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [83c407eac5c82e6b20991f6cfe3e6f662eb2f7cbcc8a79638d675d463c8120dd] <==
	{"level":"info","ts":"2024-06-10T11:06:20.161802Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"4bfcc51356ad2e46"}
	{"level":"info","ts":"2024-06-10T11:06:20.163215Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"4bfcc51356ad2e46"}
	{"level":"info","ts":"2024-06-10T11:06:20.163441Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"4bfcc51356ad2e46"}
	{"level":"info","ts":"2024-06-10T11:06:20.163487Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"4bfcc51356ad2e46"}
	{"level":"info","ts":"2024-06-10T11:06:20.163581Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"4bfcc51356ad2e46","remote-peer-urls":["https://192.168.39.27:2380"]}
	{"level":"info","ts":"2024-06-10T11:06:20.16366Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"7fe6bf77aaafe0f6","raft-conf-change":"ConfChangeAddLearnerNode","raft-conf-change-node-id":"4bfcc51356ad2e46"}
	{"level":"info","ts":"2024-06-10T11:06:20.163937Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"4bfcc51356ad2e46"}
	{"level":"warn","ts":"2024-06-10T11:06:20.736032Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"4bfcc51356ad2e46","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-06-10T11:06:20.994237Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.27:2380/version","remote-member-id":"4bfcc51356ad2e46","error":"Get \"https://192.168.39.27:2380/version\": dial tcp 192.168.39.27:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-06-10T11:06:20.99433Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"4bfcc51356ad2e46","error":"Get \"https://192.168.39.27:2380/version\": dial tcp 192.168.39.27:2380: connect: connection refused"}
	{"level":"info","ts":"2024-06-10T11:06:21.381008Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"4bfcc51356ad2e46"}
	{"level":"info","ts":"2024-06-10T11:06:21.438888Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"4bfcc51356ad2e46"}
	{"level":"info","ts":"2024-06-10T11:06:21.439044Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"4bfcc51356ad2e46"}
	{"level":"info","ts":"2024-06-10T11:06:21.49652Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"7fe6bf77aaafe0f6","to":"4bfcc51356ad2e46","stream-type":"stream Message"}
	{"level":"info","ts":"2024-06-10T11:06:21.496574Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"4bfcc51356ad2e46"}
	{"level":"info","ts":"2024-06-10T11:06:21.542418Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"7fe6bf77aaafe0f6","to":"4bfcc51356ad2e46","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-06-10T11:06:21.542504Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"4bfcc51356ad2e46"}
	{"level":"warn","ts":"2024-06-10T11:06:21.633938Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.27:42826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-06-10T11:06:21.668784Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.27:42834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-06-10T11:06:21.67894Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.27:42842","server-name":"","error":"read tcp 192.168.39.208:2380->192.168.39.27:42842: read: connection reset by peer"}
	{"level":"warn","ts":"2024-06-10T11:06:21.680395Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.27:42852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-06-10T11:06:21.726948Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"4bfcc51356ad2e46","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-06-10T11:06:22.727491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 switched to configuration voters=(5475467933824921158 8156306394685010700 9216264208145965302)"}
	{"level":"info","ts":"2024-06-10T11:06:22.727602Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"fb8a78b66dce1ac7","local-member-id":"7fe6bf77aaafe0f6"}
	{"level":"info","ts":"2024-06-10T11:06:22.72763Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"7fe6bf77aaafe0f6","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"4bfcc51356ad2e46"}
	
	
	==> etcd [a51d5bffe5db4200ac6336c7ca76cc182a95d91ff55c5a12955c341dd76f23c5] <==
	{"level":"info","ts":"2024-06-10T10:54:42.177629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 [term 3] starts to transfer leadership to 71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:54:42.177669Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 sends MsgTimeoutNow to 71310573b672730c immediately as 71310573b672730c already has up-to-date log"}
	{"level":"info","ts":"2024-06-10T10:54:42.180133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 [term: 3] received a MsgVote message with higher term from 71310573b672730c [term: 4]"}
	{"level":"info","ts":"2024-06-10T10:54:42.180187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 became follower at term 4"}
	{"level":"info","ts":"2024-06-10T10:54:42.180202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7fe6bf77aaafe0f6 [logterm: 3, index: 3624, vote: 0] cast MsgVote for 71310573b672730c [logterm: 3, index: 3624] at term 4"}
	{"level":"info","ts":"2024-06-10T10:54:42.180211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7fe6bf77aaafe0f6 lost leader 7fe6bf77aaafe0f6 at term 4"}
	{"level":"info","ts":"2024-06-10T10:54:42.181914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7fe6bf77aaafe0f6 elected leader 71310573b672730c at term 4"}
	{"level":"info","ts":"2024-06-10T10:54:42.278708Z","caller":"etcdserver/server.go:1448","msg":"leadership transfer finished","local-member-id":"7fe6bf77aaafe0f6","old-leader-member-id":"7fe6bf77aaafe0f6","new-leader-member-id":"71310573b672730c","took":"101.126124ms"}
	{"level":"info","ts":"2024-06-10T10:54:42.278948Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"71310573b672730c"}
	{"level":"warn","ts":"2024-06-10T10:54:42.279946Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:54:42.280007Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"71310573b672730c"}
	{"level":"warn","ts":"2024-06-10T10:54:42.281365Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:54:42.281473Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:54:42.281547Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c"}
	{"level":"warn","ts":"2024-06-10T10:54:42.281726Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","error":"context canceled"}
	{"level":"warn","ts":"2024-06-10T10:54:42.281815Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"71310573b672730c","error":"failed to read 71310573b672730c on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-06-10T10:54:42.281874Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c"}
	{"level":"warn","ts":"2024-06-10T10:54:42.281999Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c","error":"context canceled"}
	{"level":"info","ts":"2024-06-10T10:54:42.282037Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"7fe6bf77aaafe0f6","remote-peer-id":"71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:54:42.282068Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"71310573b672730c"}
	{"level":"info","ts":"2024-06-10T10:54:42.28884Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.208:2380"}
	{"level":"warn","ts":"2024-06-10T10:54:42.289207Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.230:49300","server-name":"","error":"read tcp 192.168.39.208:2380->192.168.39.230:49300: use of closed network connection"}
	{"level":"warn","ts":"2024-06-10T10:54:42.289267Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.230:49290","server-name":"","error":"read tcp 192.168.39.208:2380->192.168.39.230:49290: use of closed network connection"}
	{"level":"info","ts":"2024-06-10T10:54:43.289535Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.208:2380"}
	{"level":"info","ts":"2024-06-10T10:54:43.289584Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-565925","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.208:2380"],"advertise-client-urls":["https://192.168.39.208:2379"]}
	
	
	==> kernel <==
	 11:07:52 up 29 min,  0 users,  load average: 0.08, 0.20, 0.24
	Linux ha-565925 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3cbcde3714e14329d6337427e054bc34da36c1a1a94a6aad9cc9ae1b179eebdd] <==
	I0610 10:57:59.457606       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0610 10:58:09.681177       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0610 10:58:19.691071       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0610 10:58:20.691876       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0610 10:58:22.693297       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0610 10:58:25.694650       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xe3b
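
This kindnet instance panicked after exhausting its retries against the in-cluster API service at 10.96.0.1:443, which was unreachable while the control plane was restarting; the replacement instance in the next block is healthy and syncing CIDRs for the four nodes visible in its output. A minimal sketch for checking the service VIP's backing endpoints and the current kindnet pods, assuming the context name ha-565925 and an app=kindnet pod label:

	kubectl --context ha-565925 get svc,endpoints kubernetes -n default
	kubectl --context ha-565925 -n kube-system logs -l app=kindnet --tail=20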
	
	
	==> kindnet [dc71731db34e54cc482f777258e552da4eb09b06301d22a96d4b5b7a1c09553a] <==
	I0610 11:07:20.160726       1 main.go:250] Node ha-565925-m05 has CIDR [10.244.2.0/24] 
	I0610 11:07:30.174029       1 main.go:223] Handling node with IPs: map[192.168.39.208:{}]
	I0610 11:07:30.174212       1 main.go:227] handling current node
	I0610 11:07:30.174245       1 main.go:223] Handling node with IPs: map[192.168.39.230:{}]
	I0610 11:07:30.174265       1 main.go:250] Node ha-565925-m02 has CIDR [10.244.1.0/24] 
	I0610 11:07:30.174394       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0610 11:07:30.174415       1 main.go:250] Node ha-565925-m04 has CIDR [10.244.3.0/24] 
	I0610 11:07:30.174478       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0610 11:07:30.174496       1 main.go:250] Node ha-565925-m05 has CIDR [10.244.2.0/24] 
	I0610 11:07:40.190098       1 main.go:223] Handling node with IPs: map[192.168.39.208:{}]
	I0610 11:07:40.190134       1 main.go:227] handling current node
	I0610 11:07:40.190145       1 main.go:223] Handling node with IPs: map[192.168.39.230:{}]
	I0610 11:07:40.190150       1 main.go:250] Node ha-565925-m02 has CIDR [10.244.1.0/24] 
	I0610 11:07:40.190263       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0610 11:07:40.190268       1 main.go:250] Node ha-565925-m04 has CIDR [10.244.3.0/24] 
	I0610 11:07:40.190310       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0610 11:07:40.190317       1 main.go:250] Node ha-565925-m05 has CIDR [10.244.2.0/24] 
	I0610 11:07:50.205620       1 main.go:223] Handling node with IPs: map[192.168.39.208:{}]
	I0610 11:07:50.205847       1 main.go:227] handling current node
	I0610 11:07:50.205885       1 main.go:223] Handling node with IPs: map[192.168.39.230:{}]
	I0610 11:07:50.205906       1 main.go:250] Node ha-565925-m02 has CIDR [10.244.1.0/24] 
	I0610 11:07:50.206070       1 main.go:223] Handling node with IPs: map[192.168.39.229:{}]
	I0610 11:07:50.206098       1 main.go:250] Node ha-565925-m04 has CIDR [10.244.3.0/24] 
	I0610 11:07:50.206161       1 main.go:223] Handling node with IPs: map[192.168.39.27:{}]
	I0610 11:07:50.206179       1 main.go:250] Node ha-565925-m05 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [0a7adba8b85d829b73a4b55001ec3a5549587e6b92cba7280bc5042eb1d764a2] <==
	I0610 11:01:02.712974       1 naming_controller.go:291] Starting NamingConditionController
	I0610 11:01:02.713016       1 establishing_controller.go:76] Starting EstablishingController
	I0610 11:01:02.713063       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0610 11:01:02.713104       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0610 11:01:02.713135       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0610 11:01:02.805178       1 shared_informer.go:320] Caches are synced for configmaps
	I0610 11:01:02.807179       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0610 11:01:02.807263       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0610 11:01:02.807444       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 11:01:02.817685       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0610 11:01:02.819533       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0610 11:01:02.819576       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 11:01:02.828948       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0610 11:01:02.829649       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0610 11:01:02.829705       1 policy_source.go:224] refreshing policies
	I0610 11:01:02.831245       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0610 11:01:02.831283       1 aggregator.go:165] initial CRD sync complete...
	I0610 11:01:02.831301       1 autoregister_controller.go:141] Starting autoregister controller
	I0610 11:01:02.831314       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0610 11:01:02.831331       1 cache.go:39] Caches are synced for autoregister controller
	I0610 11:01:02.877900       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 11:01:03.715623       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0610 11:01:04.045531       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.208 192.168.39.230]
	I0610 11:01:04.047009       1 controller.go:615] quota admission added evaluator for: endpoints
	I0610 11:01:04.053899       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [df5196758907fd1be55dfb4db8fdf71169c2226b54a2688835b92147fbaf8b52] <==
	I0610 10:59:10.014270       1 options.go:221] external host was not specified, using 192.168.39.208
	I0610 10:59:10.015144       1 server.go:148] Version: v1.30.1
	I0610 10:59:10.015205       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 10:59:10.307284       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0610 10:59:10.317527       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0610 10:59:10.319834       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0610 10:59:10.320113       1 instance.go:299] Using reconciler: lease
	I0610 10:59:10.319817       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0610 10:59:30.307560       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0610 10:59:30.307727       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0610 10:59:30.329812       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0610 10:59:30.329828       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
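
This apiserver instance gave up after roughly 20 seconds because etcd on 127.0.0.1:2379 never answered; the instance in the previous block came up later and finished syncing its caches at 11:01:02. A minimal sketch for probing the local apiserver health endpoints once it is back, using the node IP and port from the logs above and assuming the context name ha-565925:

	# unauthenticated health probe from inside the VM (curl is available in the guest)
	out/minikube-linux-amd64 -p ha-565925 ssh "curl -sk https://192.168.39.208:8443/healthz; echo"
	# component-level readiness detail through the API
	kubectl --context ha-565925 get --raw='/readyz?verbose' | tail -n 5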
	
	
	==> kube-controller-manager [307518a974a9d81484017b6def4bcb35734f73f49643e1e3b41a2e1bb4d72619] <==
	I0610 11:01:19.948357       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-b5wq2 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-b5wq2\": the object has been modified; please apply your changes to the latest version and try again"
	I0610 11:01:19.949124       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"42df7ab3-0fab-48a9-8edf-d2a6cd96dc74", APIVersion:"v1", ResourceVersion:"251", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-b5wq2 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-b5wq2": the object has been modified; please apply your changes to the latest version and try again
	I0610 11:01:19.969018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.375438ms"
	I0610 11:01:19.969202       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="88.299µs"
	I0610 11:06:15.597277       1 taint_eviction.go:113] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-fc5497c4f-pnv2t"
	I0610 11:06:15.650809       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="143.494µs"
	I0610 11:06:15.712698       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.487384ms"
	I0610 11:06:15.737442       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.592543ms"
	I0610 11:06:15.737681       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="88.418µs"
	I0610 11:06:15.737843       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.474µs"
	I0610 11:06:16.034300       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="286.674474ms"
	I0610 11:06:16.034531       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="106.13µs"
	I0610 11:06:16.051523       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.908007ms"
	I0610 11:06:16.051647       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.861µs"
	I0610 11:06:19.119425       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-565925-m05\" does not exist"
	I0610 11:06:19.140115       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-565925-m05" podCIDRs=["10.244.2.0/24"]
	I0610 11:06:20.554809       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-565925-m05"
	I0610 11:06:21.183671       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.386µs"
	I0610 11:06:24.105910       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.425µs"
	I0610 11:06:25.598936       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.721µs"
	I0610 11:06:30.726450       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.658µs"
	I0610 11:06:30.747510       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.723µs"
	I0610 11:06:30.773015       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.37µs"
	I0610 11:06:34.193838       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.156759ms"
	I0610 11:06:34.194127       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.772µs"
	
	
	==> kube-controller-manager [9596f78d1f9f1a08bb0774a454ecd00ac562ae38017ea807582d9fe153c3ae83] <==
	I0610 10:59:10.432358       1 serving.go:380] Generated self-signed cert in-memory
	I0610 10:59:10.684166       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0610 10:59:10.684196       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 10:59:10.686805       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 10:59:10.686985       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 10:59:10.687020       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 10:59:10.687000       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0610 10:59:31.334956       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.208:8443/healthz\": dial tcp 192.168.39.208:8443: connect: connection refused"
	
	
	==> kube-proxy [a6bfc115b83fe8e36c67f3ce6d994b1cce135626a1c3a20165012107bebf06ca] <==
	W0610 10:59:01.722082       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:59:01.722242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:59:01.722442       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0610 10:59:01.722653       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-565925&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:59:01.722727       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-565925&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:59:04.793778       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:59:04.794073       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:59:10.937344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:59:10.937460       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:59:14.010071       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:59:14.010130       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:59:14.010208       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0610 10:59:14.010492       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-565925&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:59:14.010603       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-565925&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:59:26.297384       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:59:26.297528       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:59:26.297682       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0610 10:59:35.515090       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:59:35.515236       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:59:38.585911       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0610 10:59:38.586150       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-565925&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:59:38.586728       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-565925&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0610 11:00:04.598963       1 shared_informer.go:320] Caches are synced for service config
	I0610 11:00:20.298775       1 shared_informer.go:320] Caches are synced for node config
	I0610 11:00:25.198886       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d6b392205cc4da349937ddbd66cd5e4e32466011eb011c9e13a0214e5aeab566] <==
	I0610 10:50:16.480570       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 10:50:16.480704       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 10:50:16.480733       1 server_linux.go:165] "Using iptables Proxier"
	I0610 10:50:16.483458       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 10:50:16.483693       1 server.go:872] "Version info" version="v1.30.1"
	I0610 10:50:16.483731       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 10:50:16.485415       1 config.go:192] "Starting service config controller"
	I0610 10:50:16.485458       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 10:50:16.485503       1 config.go:101] "Starting endpoint slice config controller"
	I0610 10:50:16.485519       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 10:50:16.486337       1 config.go:319] "Starting node config controller"
	I0610 10:50:16.486367       1 shared_informer.go:313] Waiting for caches to sync for node config
	W0610 10:50:19.481660       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:50:19.481945       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:50:19.483161       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.254:8443: connect: no route to host"
	W0610 10:50:19.483323       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-565925&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:50:19.483424       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-565925&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0610 10:50:19.483590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0610 10:50:19.483667       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0610 10:50:20.586480       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 10:50:20.885886       1 shared_informer.go:320] Caches are synced for service config
	I0610 10:50:20.886651       1 shared_informer.go:320] Caches are synced for node config
	W0610 10:53:04.252585       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0610 10:53:04.252975       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0610 10:53:04.252979       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
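
Both kube-proxy instances lose the apiserver through 192.168.39.254:8443, i.e. the control-plane.minikube.internal virtual IP that kube-vip advertises for this HA profile; "no route to host" there indicates no control plane was holding the VIP at the time. A minimal sketch for probing the VIP and locating the kube-vip static pods, assuming the context name ha-565925:

	# probe the VIP (IP and port taken from the kube-proxy errors above)
	out/minikube-linux-amd64 -p ha-565925 ssh "curl -sk https://192.168.39.254:8443/healthz; echo"
	# kube-vip runs as a static pod on each control-plane node (one is listed in the m05 pod table above)
	kubectl --context ha-565925 -n kube-system get pods -o wide | grep kube-vip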
	
	
	==> kube-scheduler [d162210bec339f31d4b24d962ad510c8c5712d5173ea2a82ebe50e463194bf12] <==
	W0610 11:00:27.009004       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.208:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 11:00:27.009047       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.208:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 11:00:27.059796       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.208:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 11:00:27.059839       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.208:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 11:00:29.112975       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.208:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 11:00:29.113094       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.208:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 11:00:31.301433       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.208:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 11:00:31.301478       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.208:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 11:00:34.520628       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.208:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 11:00:34.520810       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.208:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 11:00:37.060630       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 11:00:37.060669       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 11:00:37.112601       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.208:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 11:00:37.112804       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.208:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 11:00:45.256863       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.208:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 11:00:45.257037       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.208:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 11:00:45.916588       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.208:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 11:00:45.916650       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.208:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 11:00:49.584561       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 11:00:49.584636       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 11:00:51.537079       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 11:00:51.537193       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 11:00:57.757909       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.208:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 11:00:57.757987       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.208:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	I0610 11:01:05.365147       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d73c4fbf165479cdecba73b84a6325cb243bbb4cd1fed39f1c9a2c00168252e1] <==
	E0610 10:50:13.171445       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:13.349389       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:13.349453       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:14.073188       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.208:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:14.073242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.208:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:14.293199       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:14.293274       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.208:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:14.389307       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.208:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:14.389425       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.208:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:14.514209       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.208:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:14.514616       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.208:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:15.509656       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.208:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	E0610 10:50:15.509725       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.208:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.208:8443: connect: connection refused
	W0610 10:50:17.832639       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 10:50:17.832863       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 10:50:17.833061       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 10:50:17.833139       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 10:50:17.833237       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 10:50:17.833265       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0610 10:50:30.277918       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0610 10:52:02.506730       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pnv2t\": pod busybox-fc5497c4f-pnv2t is already assigned to node \"ha-565925-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-pnv2t" node="ha-565925-m04"
	E0610 10:52:02.508644       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod fc130e49-4bd9-4d39-86e2-5c9633be05c5(default/busybox-fc5497c4f-pnv2t) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-pnv2t"
	E0610 10:52:02.508944       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-pnv2t\": pod busybox-fc5497c4f-pnv2t is already assigned to node \"ha-565925-m04\"" pod="default/busybox-fc5497c4f-pnv2t"
	I0610 10:52:02.510673       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-pnv2t" node="ha-565925-m04"
	E0610 10:54:42.082619       1 run.go:74] "command failed" err="finished without leader elect"
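
	The scheduler block above ends with leader election exiting ("finished without leader elect") after a long run of connection-refused errors against 192.168.39.208:8443. The scheduler's election state lives in the coordination.k8s.io Lease named "kube-scheduler" in kube-system; the following is a minimal client-go sketch for reading that Lease, offered purely as an illustration (it is not part of the test suite, and it assumes a reachable cluster plus a kubeconfig at the default path):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// Print the current holder of the kube-scheduler leader-election Lease.
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		lease, err := cs.CoordinationV1().Leases("kube-system").Get(
			context.Background(), "kube-scheduler", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		holder := "<none>"
		if lease.Spec.HolderIdentity != nil {
			holder = *lease.Spec.HolderIdentity
		}
		fmt.Printf("holder=%s renewTime=%v\n", holder, lease.Spec.RenewTime)
	}

	If the API server is unreachable, as in the log above, the Get call fails with the same "connection refused" the scheduler reports.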
	
	
	==> kubelet <==
	Jun 10 11:06:57 ha-565925 kubelet[1367]: E0610 11:06:57.817414    1367 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists"
	Jun 10 11:06:57 ha-565925 kubelet[1367]: E0610 11:06:57.817493    1367 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists" pod="kube-system/coredns-7db6d8ff4d-545cf"
	Jun 10 11:06:57 ha-565925 kubelet[1367]: E0610 11:06:57.817512    1367 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists" pod="kube-system/coredns-7db6d8ff4d-545cf"
	Jun 10 11:06:57 ha-565925 kubelet[1367]: E0610 11:06:57.817547    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-545cf_kube-system(7564efde-b96c-48b3-b194-bca695f7ae95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-545cf_kube-system(7564efde-b96c-48b3-b194-bca695f7ae95)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\\\" already exists\"" pod="kube-system/coredns-7db6d8ff4d-545cf" podUID="7564efde-b96c-48b3-b194-bca695f7ae95"
	Jun 10 11:07:09 ha-565925 kubelet[1367]: E0610 11:07:09.815974    1367 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists"
	Jun 10 11:07:09 ha-565925 kubelet[1367]: E0610 11:07:09.816336    1367 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists" pod="kube-system/coredns-7db6d8ff4d-545cf"
	Jun 10 11:07:09 ha-565925 kubelet[1367]: E0610 11:07:09.816419    1367 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists" pod="kube-system/coredns-7db6d8ff4d-545cf"
	Jun 10 11:07:09 ha-565925 kubelet[1367]: E0610 11:07:09.816536    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-545cf_kube-system(7564efde-b96c-48b3-b194-bca695f7ae95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-545cf_kube-system(7564efde-b96c-48b3-b194-bca695f7ae95)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\\\" already exists\"" pod="kube-system/coredns-7db6d8ff4d-545cf" podUID="7564efde-b96c-48b3-b194-bca695f7ae95"
	Jun 10 11:07:23 ha-565925 kubelet[1367]: E0610 11:07:23.816207    1367 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists"
	Jun 10 11:07:23 ha-565925 kubelet[1367]: E0610 11:07:23.816538    1367 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists" pod="kube-system/coredns-7db6d8ff4d-545cf"
	Jun 10 11:07:23 ha-565925 kubelet[1367]: E0610 11:07:23.816601    1367 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists" pod="kube-system/coredns-7db6d8ff4d-545cf"
	Jun 10 11:07:23 ha-565925 kubelet[1367]: E0610 11:07:23.816685    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-545cf_kube-system(7564efde-b96c-48b3-b194-bca695f7ae95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-545cf_kube-system(7564efde-b96c-48b3-b194-bca695f7ae95)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\\\" already exists\"" pod="kube-system/coredns-7db6d8ff4d-545cf" podUID="7564efde-b96c-48b3-b194-bca695f7ae95"
	Jun 10 11:07:30 ha-565925 kubelet[1367]: E0610 11:07:30.828475    1367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 11:07:30 ha-565925 kubelet[1367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 11:07:30 ha-565925 kubelet[1367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 11:07:30 ha-565925 kubelet[1367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 11:07:30 ha-565925 kubelet[1367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 11:07:34 ha-565925 kubelet[1367]: E0610 11:07:34.817361    1367 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists"
	Jun 10 11:07:34 ha-565925 kubelet[1367]: E0610 11:07:34.817409    1367 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists" pod="kube-system/coredns-7db6d8ff4d-545cf"
	Jun 10 11:07:34 ha-565925 kubelet[1367]: E0610 11:07:34.817430    1367 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists" pod="kube-system/coredns-7db6d8ff4d-545cf"
	Jun 10 11:07:34 ha-565925 kubelet[1367]: E0610 11:07:34.817464    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-545cf_kube-system(7564efde-b96c-48b3-b194-bca695f7ae95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-545cf_kube-system(7564efde-b96c-48b3-b194-bca695f7ae95)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\\\" already exists\"" pod="kube-system/coredns-7db6d8ff4d-545cf" podUID="7564efde-b96c-48b3-b194-bca695f7ae95"
	Jun 10 11:07:48 ha-565925 kubelet[1367]: E0610 11:07:48.815948    1367 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists"
	Jun 10 11:07:48 ha-565925 kubelet[1367]: E0610 11:07:48.816287    1367 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists" pod="kube-system/coredns-7db6d8ff4d-545cf"
	Jun 10 11:07:48 ha-565925 kubelet[1367]: E0610 11:07:48.816341    1367 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\" already exists" pod="kube-system/coredns-7db6d8ff4d-545cf"
	Jun 10 11:07:48 ha-565925 kubelet[1367]: E0610 11:07:48.816437    1367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-545cf_kube-system(7564efde-b96c-48b3-b194-bca695f7ae95)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-545cf_kube-system(7564efde-b96c-48b3-b194-bca695f7ae95)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_coredns-7db6d8ff4d-545cf_kube-system_7564efde-b96c-48b3-b194-bca695f7ae95_2\\\" already exists\"" pod="kube-system/coredns-7db6d8ff4d-545cf" podUID="7564efde-b96c-48b3-b194-bca695f7ae95"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 11:07:51.653204   33764 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19046-3880/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-565925 -n ha-565925
helpers_test.go:261: (dbg) Run:  kubectl --context ha-565925 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (140.74s)
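
One error in the stderr block above deserves a note: "failed to read file .../lastStart.txt: bufio.Scanner: token too long" is Go's bufio.ErrTooLong, returned when a single line is longer than the Scanner's buffer (64 KiB by default). The sketch below is not minikube's logs.go, just a minimal, self-contained illustration of reading such a file with an enlarged scanner buffer (the file name is a placeholder):

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	// Read a log file line by line while tolerating very long lines.
	// With the default 64 KiB buffer, Scan stops and Err returns
	// bufio.ErrTooLong ("token too long") on an oversized line.
	func main() {
		f, err := os.Open("lastStart.txt") // placeholder path
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Raise the maximum token size to 1 MiB so one long line
		// no longer aborts the whole scan.
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan failed:", err)
		}
	}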

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (304.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-862380
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-862380
E0610 11:16:57.914083   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-862380: exit status 82 (2m1.850422923s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-862380-m03"  ...
	* Stopping node "multinode-862380-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-862380" : exit status 82
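
The "exit status 82" reported here is the exit code of the "out/minikube-linux-amd64 stop" process, the one the stderr box above ties to GUEST_STOP_TIMEOUT. For readers unfamiliar with how a Go caller surfaces that code, below is a minimal sketch; it is not the actual multinode_test.go helper, and the binary path and arguments are copied from the run above:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// Run a command and report its exit code, the way the harness
	// prints "Non-zero exit: ... exit status 82" above.
	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "multinode-862380")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Println("non-zero exit:", exitErr.ExitCode())
		} else if err != nil {
			fmt.Println("failed to run command:", err)
		}
	}
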
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-862380 --wait=true -v=8 --alsologtostderr
E0610 11:19:12.453141   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-862380 --wait=true -v=8 --alsologtostderr: (3m0.625195939s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-862380
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-862380 -n multinode-862380
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-862380 logs -n 25: (1.382267763s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-862380 ssh -n                                                                 | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | multinode-862380-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-862380 cp multinode-862380-m02:/home/docker/cp-test.txt                       | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4163337793/001/cp-test_multinode-862380-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-862380 ssh -n                                                                 | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | multinode-862380-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-862380 cp multinode-862380-m02:/home/docker/cp-test.txt                       | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | multinode-862380:/home/docker/cp-test_multinode-862380-m02_multinode-862380.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-862380 ssh -n                                                                 | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | multinode-862380-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-862380 ssh -n multinode-862380 sudo cat                                       | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | /home/docker/cp-test_multinode-862380-m02_multinode-862380.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-862380 cp multinode-862380-m02:/home/docker/cp-test.txt                       | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | multinode-862380-m03:/home/docker/cp-test_multinode-862380-m02_multinode-862380-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-862380 ssh -n                                                                 | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | multinode-862380-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-862380 ssh -n multinode-862380-m03 sudo cat                                   | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | /home/docker/cp-test_multinode-862380-m02_multinode-862380-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-862380 cp testdata/cp-test.txt                                                | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | multinode-862380-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-862380 ssh -n                                                                 | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | multinode-862380-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-862380 cp multinode-862380-m03:/home/docker/cp-test.txt                       | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4163337793/001/cp-test_multinode-862380-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-862380 ssh -n                                                                 | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | multinode-862380-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-862380 cp multinode-862380-m03:/home/docker/cp-test.txt                       | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | multinode-862380:/home/docker/cp-test_multinode-862380-m03_multinode-862380.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-862380 ssh -n                                                                 | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | multinode-862380-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-862380 ssh -n multinode-862380 sudo cat                                       | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | /home/docker/cp-test_multinode-862380-m03_multinode-862380.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-862380 cp multinode-862380-m03:/home/docker/cp-test.txt                       | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | multinode-862380-m02:/home/docker/cp-test_multinode-862380-m03_multinode-862380-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-862380 ssh -n                                                                 | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | multinode-862380-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-862380 ssh -n multinode-862380-m02 sudo cat                                   | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | /home/docker/cp-test_multinode-862380-m03_multinode-862380-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-862380 node stop m03                                                          | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	| node    | multinode-862380 node start                                                             | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:15 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-862380                                                                | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:15 UTC |                     |
	| stop    | -p multinode-862380                                                                     | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:15 UTC |                     |
	| start   | -p multinode-862380                                                                     | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:17 UTC | 10 Jun 24 11:20 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-862380                                                                | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:20 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 11:17:27
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 11:17:27.742136   40730 out.go:291] Setting OutFile to fd 1 ...
	I0610 11:17:27.742357   40730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:17:27.742369   40730 out.go:304] Setting ErrFile to fd 2...
	I0610 11:17:27.742376   40730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:17:27.742815   40730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 11:17:27.743426   40730 out.go:298] Setting JSON to false
	I0610 11:17:27.744279   40730 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":3589,"bootTime":1718014659,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 11:17:27.744339   40730 start.go:139] virtualization: kvm guest
	I0610 11:17:27.746715   40730 out.go:177] * [multinode-862380] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 11:17:27.748511   40730 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 11:17:27.748446   40730 notify.go:220] Checking for updates...
	I0610 11:17:27.749869   40730 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 11:17:27.751357   40730 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 11:17:27.752674   40730 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 11:17:27.754079   40730 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 11:17:27.755378   40730 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 11:17:27.757046   40730 config.go:182] Loaded profile config "multinode-862380": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:17:27.757156   40730 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 11:17:27.757548   40730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:17:27.757589   40730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:17:27.773929   40730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41473
	I0610 11:17:27.774423   40730 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:17:27.775103   40730 main.go:141] libmachine: Using API Version  1
	I0610 11:17:27.775124   40730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:17:27.775534   40730 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:17:27.775762   40730 main.go:141] libmachine: (multinode-862380) Calling .DriverName
	I0610 11:17:27.813107   40730 out.go:177] * Using the kvm2 driver based on existing profile
	I0610 11:17:27.814545   40730 start.go:297] selected driver: kvm2
	I0610 11:17:27.814561   40730 start.go:901] validating driver "kvm2" against &{Name:multinode-862380 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-862380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.68 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:17:27.814739   40730 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 11:17:27.815129   40730 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 11:17:27.815205   40730 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19046-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0610 11:17:27.831006   40730 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0610 11:17:27.831638   40730 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 11:17:27.831683   40730 cni.go:84] Creating CNI manager for ""
	I0610 11:17:27.831694   40730 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0610 11:17:27.831744   40730 start.go:340] cluster config:
	{Name:multinode-862380 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-862380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.68 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:17:27.831849   40730 iso.go:125] acquiring lock: {Name:mk85871dbcb370cef71a4aa2ff7ef43503908c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 11:17:27.833997   40730 out.go:177] * Starting "multinode-862380" primary control-plane node in "multinode-862380" cluster
	I0610 11:17:27.835470   40730 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 11:17:27.835508   40730 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0610 11:17:27.835517   40730 cache.go:56] Caching tarball of preloaded images
	I0610 11:17:27.835595   40730 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 11:17:27.835606   40730 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0610 11:17:27.835721   40730 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/multinode-862380/config.json ...
	I0610 11:17:27.835928   40730 start.go:360] acquireMachinesLock for multinode-862380: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 11:17:27.835974   40730 start.go:364] duration metric: took 24.029µs to acquireMachinesLock for "multinode-862380"
	I0610 11:17:27.835987   40730 start.go:96] Skipping create...Using existing machine configuration
	I0610 11:17:27.835995   40730 fix.go:54] fixHost starting: 
	I0610 11:17:27.836236   40730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:17:27.836256   40730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:17:27.851223   40730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39651
	I0610 11:17:27.851613   40730 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:17:27.852228   40730 main.go:141] libmachine: Using API Version  1
	I0610 11:17:27.852257   40730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:17:27.852673   40730 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:17:27.852909   40730 main.go:141] libmachine: (multinode-862380) Calling .DriverName
	I0610 11:17:27.853094   40730 main.go:141] libmachine: (multinode-862380) Calling .GetState
	I0610 11:17:27.854904   40730 fix.go:112] recreateIfNeeded on multinode-862380: state=Running err=<nil>
	W0610 11:17:27.854966   40730 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 11:17:27.856939   40730 out.go:177] * Updating the running kvm2 "multinode-862380" VM ...
	I0610 11:17:27.858425   40730 machine.go:94] provisionDockerMachine start ...
	I0610 11:17:27.858444   40730 main.go:141] libmachine: (multinode-862380) Calling .DriverName
	I0610 11:17:27.858675   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHHostname
	I0610 11:17:27.861582   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:27.862219   40730 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:17:27.862261   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:27.862452   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHPort
	I0610 11:17:27.862655   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:17:27.862811   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:17:27.862923   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHUsername
	I0610 11:17:27.863077   40730 main.go:141] libmachine: Using SSH client type: native
	I0610 11:17:27.863383   40730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0610 11:17:27.863397   40730 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 11:17:27.985941   40730 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-862380
	
	I0610 11:17:27.985982   40730 main.go:141] libmachine: (multinode-862380) Calling .GetMachineName
	I0610 11:17:27.986204   40730 buildroot.go:166] provisioning hostname "multinode-862380"
	I0610 11:17:27.986245   40730 main.go:141] libmachine: (multinode-862380) Calling .GetMachineName
	I0610 11:17:27.986472   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHHostname
	I0610 11:17:27.989280   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:27.989783   40730 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:17:27.989812   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:27.989981   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHPort
	I0610 11:17:27.990173   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:17:27.990346   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:17:27.990491   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHUsername
	I0610 11:17:27.990630   40730 main.go:141] libmachine: Using SSH client type: native
	I0610 11:17:27.990790   40730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0610 11:17:27.990810   40730 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-862380 && echo "multinode-862380" | sudo tee /etc/hostname
	I0610 11:17:28.128093   40730 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-862380
	
	I0610 11:17:28.128124   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHHostname
	I0610 11:17:28.131162   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:28.131624   40730 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:17:28.131638   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:28.131847   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHPort
	I0610 11:17:28.132033   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:17:28.132194   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:17:28.132351   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHUsername
	I0610 11:17:28.132546   40730 main.go:141] libmachine: Using SSH client type: native
	I0610 11:17:28.132734   40730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0610 11:17:28.132752   40730 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-862380' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-862380/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-862380' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 11:17:28.245813   40730 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 11:17:28.245851   40730 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19046-3880/.minikube CaCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19046-3880/.minikube}
	I0610 11:17:28.245876   40730 buildroot.go:174] setting up certificates
	I0610 11:17:28.245886   40730 provision.go:84] configureAuth start
	I0610 11:17:28.245903   40730 main.go:141] libmachine: (multinode-862380) Calling .GetMachineName
	I0610 11:17:28.246189   40730 main.go:141] libmachine: (multinode-862380) Calling .GetIP
	I0610 11:17:28.248852   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:28.249297   40730 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:17:28.249326   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:28.249486   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHHostname
	I0610 11:17:28.252038   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:28.252459   40730 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:17:28.252496   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:28.252592   40730 provision.go:143] copyHostCerts
	I0610 11:17:28.252621   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 11:17:28.252679   40730 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem, removing ...
	I0610 11:17:28.252687   40730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 11:17:28.252754   40730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem (1082 bytes)
	I0610 11:17:28.252832   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 11:17:28.252849   40730 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem, removing ...
	I0610 11:17:28.252855   40730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 11:17:28.252881   40730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem (1123 bytes)
	I0610 11:17:28.252928   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 11:17:28.252963   40730 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem, removing ...
	I0610 11:17:28.252973   40730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 11:17:28.253002   40730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem (1675 bytes)
	I0610 11:17:28.253053   40730 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem org=jenkins.multinode-862380 san=[127.0.0.1 192.168.39.100 localhost minikube multinode-862380]
	I0610 11:17:28.383126   40730 provision.go:177] copyRemoteCerts
	I0610 11:17:28.383208   40730 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 11:17:28.383241   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHHostname
	I0610 11:17:28.385756   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:28.386110   40730 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:17:28.386141   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:28.386340   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHPort
	I0610 11:17:28.386519   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:17:28.386709   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHUsername
	I0610 11:17:28.386808   40730 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/multinode-862380/id_rsa Username:docker}
	I0610 11:17:28.478398   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 11:17:28.478476   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 11:17:28.502382   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 11:17:28.502444   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0610 11:17:28.525674   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 11:17:28.525733   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 11:17:28.549055   40730 provision.go:87] duration metric: took 303.155105ms to configureAuth
	I0610 11:17:28.549081   40730 buildroot.go:189] setting minikube options for container-runtime
	I0610 11:17:28.549263   40730 config.go:182] Loaded profile config "multinode-862380": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:17:28.549323   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHHostname
	I0610 11:17:28.551963   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:28.552308   40730 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:17:28.552337   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:28.552519   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHPort
	I0610 11:17:28.552709   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:17:28.552856   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:17:28.553005   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHUsername
	I0610 11:17:28.553150   40730 main.go:141] libmachine: Using SSH client type: native
	I0610 11:17:28.553314   40730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0610 11:17:28.553328   40730 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 11:18:59.348298   40730 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 11:18:59.348320   40730 machine.go:97] duration metric: took 1m31.489881873s to provisionDockerMachine
	I0610 11:18:59.348333   40730 start.go:293] postStartSetup for "multinode-862380" (driver="kvm2")
	I0610 11:18:59.348348   40730 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 11:18:59.348363   40730 main.go:141] libmachine: (multinode-862380) Calling .DriverName
	I0610 11:18:59.348685   40730 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 11:18:59.348715   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHHostname
	I0610 11:18:59.351889   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:18:59.352391   40730 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:18:59.352416   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:18:59.352570   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHPort
	I0610 11:18:59.352756   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:18:59.352914   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHUsername
	I0610 11:18:59.353068   40730 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/multinode-862380/id_rsa Username:docker}
	I0610 11:18:59.441547   40730 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 11:18:59.445446   40730 command_runner.go:130] > NAME=Buildroot
	I0610 11:18:59.445468   40730 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0610 11:18:59.445474   40730 command_runner.go:130] > ID=buildroot
	I0610 11:18:59.445482   40730 command_runner.go:130] > VERSION_ID=2023.02.9
	I0610 11:18:59.445489   40730 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0610 11:18:59.445580   40730 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 11:18:59.445596   40730 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/addons for local assets ...
	I0610 11:18:59.445665   40730 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/files for local assets ...
	I0610 11:18:59.445753   40730 filesync.go:149] local asset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> 107582.pem in /etc/ssl/certs
	I0610 11:18:59.445763   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /etc/ssl/certs/107582.pem
	I0610 11:18:59.445862   40730 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 11:18:59.454820   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /etc/ssl/certs/107582.pem (1708 bytes)
	I0610 11:18:59.478152   40730 start.go:296] duration metric: took 129.804765ms for postStartSetup
	I0610 11:18:59.478230   40730 fix.go:56] duration metric: took 1m31.64223361s for fixHost
	I0610 11:18:59.478254   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHHostname
	I0610 11:18:59.480834   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:18:59.481323   40730 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:18:59.481348   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:18:59.481530   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHPort
	I0610 11:18:59.481738   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:18:59.481891   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:18:59.482040   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHUsername
	I0610 11:18:59.482214   40730 main.go:141] libmachine: Using SSH client type: native
	I0610 11:18:59.482370   40730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0610 11:18:59.482380   40730 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 11:18:59.593390   40730 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718018339.576506580
	
	I0610 11:18:59.593412   40730 fix.go:216] guest clock: 1718018339.576506580
	I0610 11:18:59.593422   40730 fix.go:229] Guest: 2024-06-10 11:18:59.57650658 +0000 UTC Remote: 2024-06-10 11:18:59.478235633 +0000 UTC m=+91.771991040 (delta=98.270947ms)
	I0610 11:18:59.593456   40730 fix.go:200] guest clock delta is within tolerance: 98.270947ms
	I0610 11:18:59.593462   40730 start.go:83] releasing machines lock for "multinode-862380", held for 1m31.757479488s
	I0610 11:18:59.593493   40730 main.go:141] libmachine: (multinode-862380) Calling .DriverName
	I0610 11:18:59.593726   40730 main.go:141] libmachine: (multinode-862380) Calling .GetIP
	I0610 11:18:59.596171   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:18:59.596567   40730 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:18:59.596595   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:18:59.596750   40730 main.go:141] libmachine: (multinode-862380) Calling .DriverName
	I0610 11:18:59.597319   40730 main.go:141] libmachine: (multinode-862380) Calling .DriverName
	I0610 11:18:59.597508   40730 main.go:141] libmachine: (multinode-862380) Calling .DriverName
	I0610 11:18:59.597566   40730 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 11:18:59.597622   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHHostname
	I0610 11:18:59.597731   40730 ssh_runner.go:195] Run: cat /version.json
	I0610 11:18:59.597752   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHHostname
	I0610 11:18:59.600117   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:18:59.600383   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:18:59.600441   40730 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:18:59.600469   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:18:59.600559   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHPort
	I0610 11:18:59.600723   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:18:59.600815   40730 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:18:59.600838   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:18:59.600906   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHUsername
	I0610 11:18:59.601031   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHPort
	I0610 11:18:59.601104   40730 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/multinode-862380/id_rsa Username:docker}
	I0610 11:18:59.601190   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:18:59.601332   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHUsername
	I0610 11:18:59.601456   40730 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/multinode-862380/id_rsa Username:docker}
	I0610 11:18:59.681468   40730 command_runner.go:130] > {"iso_version": "v1.33.1-1717668912-19038", "kicbase_version": "v0.0.44-1717518322-19024", "minikube_version": "v1.33.1", "commit": "7bc04027a908a7d4d31c30e8938372fcb07a9689"}
	I0610 11:18:59.711411   40730 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0610 11:18:59.712150   40730 ssh_runner.go:195] Run: systemctl --version
	I0610 11:18:59.718080   40730 command_runner.go:130] > systemd 252 (252)
	I0610 11:18:59.718123   40730 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0610 11:18:59.718187   40730 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 11:18:59.876483   40730 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 11:18:59.881982   40730 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0610 11:18:59.882034   40730 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 11:18:59.882096   40730 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 11:18:59.891440   40730 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0610 11:18:59.891466   40730 start.go:494] detecting cgroup driver to use...
	I0610 11:18:59.891556   40730 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 11:18:59.909949   40730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 11:18:59.923584   40730 docker.go:217] disabling cri-docker service (if available) ...
	I0610 11:18:59.923649   40730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 11:18:59.937462   40730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 11:18:59.951603   40730 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 11:19:00.105032   40730 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 11:19:00.245811   40730 docker.go:233] disabling docker service ...
	I0610 11:19:00.245897   40730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 11:19:00.263047   40730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 11:19:00.276364   40730 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 11:19:00.416005   40730 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 11:19:00.557845   40730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 11:19:00.571660   40730 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 11:19:00.589228   40730 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0610 11:19:00.589680   40730 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0610 11:19:00.589735   40730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:19:00.599480   40730 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 11:19:00.599548   40730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:19:00.609285   40730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:19:00.619165   40730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:19:00.629352   40730 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 11:19:00.639278   40730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:19:00.649121   40730 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:19:00.659762   40730 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:19:00.669552   40730 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 11:19:00.678331   40730 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0610 11:19:00.678413   40730 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 11:19:00.687477   40730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:19:00.819219   40730 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0610 11:19:04.048749   40730 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.229491822s)
	I0610 11:19:04.048783   40730 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 11:19:04.048826   40730 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 11:19:04.053224   40730 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0610 11:19:04.053256   40730 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0610 11:19:04.053266   40730 command_runner.go:130] > Device: 0,22	Inode: 1324        Links: 1
	I0610 11:19:04.053275   40730 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 11:19:04.053282   40730 command_runner.go:130] > Access: 2024-06-10 11:19:03.915710586 +0000
	I0610 11:19:04.053291   40730 command_runner.go:130] > Modify: 2024-06-10 11:19:03.915710586 +0000
	I0610 11:19:04.053298   40730 command_runner.go:130] > Change: 2024-06-10 11:19:03.915710586 +0000
	I0610 11:19:04.053303   40730 command_runner.go:130] >  Birth: -
	I0610 11:19:04.053355   40730 start.go:562] Will wait 60s for crictl version
	I0610 11:19:04.053406   40730 ssh_runner.go:195] Run: which crictl
	I0610 11:19:04.056899   40730 command_runner.go:130] > /usr/bin/crictl
	I0610 11:19:04.056982   40730 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 11:19:04.092521   40730 command_runner.go:130] > Version:  0.1.0
	I0610 11:19:04.092544   40730 command_runner.go:130] > RuntimeName:  cri-o
	I0610 11:19:04.092549   40730 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0610 11:19:04.092554   40730 command_runner.go:130] > RuntimeApiVersion:  v1
	I0610 11:19:04.092571   40730 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0610 11:19:04.092637   40730 ssh_runner.go:195] Run: crio --version
	I0610 11:19:04.122131   40730 command_runner.go:130] > crio version 1.29.1
	I0610 11:19:04.122154   40730 command_runner.go:130] > Version:        1.29.1
	I0610 11:19:04.122162   40730 command_runner.go:130] > GitCommit:      unknown
	I0610 11:19:04.122168   40730 command_runner.go:130] > GitCommitDate:  unknown
	I0610 11:19:04.122174   40730 command_runner.go:130] > GitTreeState:   clean
	I0610 11:19:04.122183   40730 command_runner.go:130] > BuildDate:      2024-06-06T15:30:03Z
	I0610 11:19:04.122189   40730 command_runner.go:130] > GoVersion:      go1.21.6
	I0610 11:19:04.122195   40730 command_runner.go:130] > Compiler:       gc
	I0610 11:19:04.122202   40730 command_runner.go:130] > Platform:       linux/amd64
	I0610 11:19:04.122211   40730 command_runner.go:130] > Linkmode:       dynamic
	I0610 11:19:04.122216   40730 command_runner.go:130] > BuildTags:      
	I0610 11:19:04.122221   40730 command_runner.go:130] >   containers_image_ostree_stub
	I0610 11:19:04.122226   40730 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0610 11:19:04.122229   40730 command_runner.go:130] >   btrfs_noversion
	I0610 11:19:04.122234   40730 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0610 11:19:04.122238   40730 command_runner.go:130] >   libdm_no_deferred_remove
	I0610 11:19:04.122244   40730 command_runner.go:130] >   seccomp
	I0610 11:19:04.122251   40730 command_runner.go:130] > LDFlags:          unknown
	I0610 11:19:04.122257   40730 command_runner.go:130] > SeccompEnabled:   true
	I0610 11:19:04.122267   40730 command_runner.go:130] > AppArmorEnabled:  false
	I0610 11:19:04.122331   40730 ssh_runner.go:195] Run: crio --version
	I0610 11:19:04.149089   40730 command_runner.go:130] > crio version 1.29.1
	I0610 11:19:04.149111   40730 command_runner.go:130] > Version:        1.29.1
	I0610 11:19:04.149120   40730 command_runner.go:130] > GitCommit:      unknown
	I0610 11:19:04.149126   40730 command_runner.go:130] > GitCommitDate:  unknown
	I0610 11:19:04.149132   40730 command_runner.go:130] > GitTreeState:   clean
	I0610 11:19:04.149143   40730 command_runner.go:130] > BuildDate:      2024-06-06T15:30:03Z
	I0610 11:19:04.149149   40730 command_runner.go:130] > GoVersion:      go1.21.6
	I0610 11:19:04.149155   40730 command_runner.go:130] > Compiler:       gc
	I0610 11:19:04.149161   40730 command_runner.go:130] > Platform:       linux/amd64
	I0610 11:19:04.149168   40730 command_runner.go:130] > Linkmode:       dynamic
	I0610 11:19:04.149177   40730 command_runner.go:130] > BuildTags:      
	I0610 11:19:04.149189   40730 command_runner.go:130] >   containers_image_ostree_stub
	I0610 11:19:04.149201   40730 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0610 11:19:04.149209   40730 command_runner.go:130] >   btrfs_noversion
	I0610 11:19:04.149218   40730 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0610 11:19:04.149229   40730 command_runner.go:130] >   libdm_no_deferred_remove
	I0610 11:19:04.149236   40730 command_runner.go:130] >   seccomp
	I0610 11:19:04.149252   40730 command_runner.go:130] > LDFlags:          unknown
	I0610 11:19:04.149259   40730 command_runner.go:130] > SeccompEnabled:   true
	I0610 11:19:04.149270   40730 command_runner.go:130] > AppArmorEnabled:  false
	I0610 11:19:04.152391   40730 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0610 11:19:04.154314   40730 main.go:141] libmachine: (multinode-862380) Calling .GetIP
	I0610 11:19:04.157326   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:19:04.157744   40730 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:19:04.157769   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:19:04.157988   40730 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0610 11:19:04.163915   40730 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0610 11:19:04.164032   40730 kubeadm.go:877] updating cluster {Name:multinode-862380 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-862380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.68 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 11:19:04.164182   40730 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 11:19:04.164340   40730 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 11:19:04.213931   40730 command_runner.go:130] > {
	I0610 11:19:04.213954   40730 command_runner.go:130] >   "images": [
	I0610 11:19:04.213960   40730 command_runner.go:130] >     {
	I0610 11:19:04.213971   40730 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0610 11:19:04.213977   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.213985   40730 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0610 11:19:04.213991   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214002   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.214014   40730 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0610 11:19:04.214024   40730 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0610 11:19:04.214031   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214039   40730 command_runner.go:130] >       "size": "65291810",
	I0610 11:19:04.214048   40730 command_runner.go:130] >       "uid": null,
	I0610 11:19:04.214055   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.214065   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.214075   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.214081   40730 command_runner.go:130] >     },
	I0610 11:19:04.214087   40730 command_runner.go:130] >     {
	I0610 11:19:04.214098   40730 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0610 11:19:04.214108   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.214117   40730 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0610 11:19:04.214124   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214134   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.214150   40730 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0610 11:19:04.214163   40730 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0610 11:19:04.214171   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214179   40730 command_runner.go:130] >       "size": "65908273",
	I0610 11:19:04.214186   40730 command_runner.go:130] >       "uid": null,
	I0610 11:19:04.214197   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.214206   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.214213   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.214222   40730 command_runner.go:130] >     },
	I0610 11:19:04.214229   40730 command_runner.go:130] >     {
	I0610 11:19:04.214242   40730 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0610 11:19:04.214250   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.214258   40730 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0610 11:19:04.214265   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214275   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.214289   40730 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0610 11:19:04.214305   40730 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0610 11:19:04.214313   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214322   40730 command_runner.go:130] >       "size": "1363676",
	I0610 11:19:04.214331   40730 command_runner.go:130] >       "uid": null,
	I0610 11:19:04.214338   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.214347   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.214354   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.214363   40730 command_runner.go:130] >     },
	I0610 11:19:04.214369   40730 command_runner.go:130] >     {
	I0610 11:19:04.214383   40730 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0610 11:19:04.214393   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.214403   40730 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0610 11:19:04.214412   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214420   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.214436   40730 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0610 11:19:04.214457   40730 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0610 11:19:04.214466   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214473   40730 command_runner.go:130] >       "size": "31470524",
	I0610 11:19:04.214483   40730 command_runner.go:130] >       "uid": null,
	I0610 11:19:04.214493   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.214503   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.214511   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.214520   40730 command_runner.go:130] >     },
	I0610 11:19:04.214527   40730 command_runner.go:130] >     {
	I0610 11:19:04.214540   40730 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0610 11:19:04.214550   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.214559   40730 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0610 11:19:04.214568   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214576   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.214592   40730 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0610 11:19:04.214607   40730 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0610 11:19:04.214616   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214624   40730 command_runner.go:130] >       "size": "61245718",
	I0610 11:19:04.214634   40730 command_runner.go:130] >       "uid": null,
	I0610 11:19:04.214645   40730 command_runner.go:130] >       "username": "nonroot",
	I0610 11:19:04.214655   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.214664   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.214670   40730 command_runner.go:130] >     },
	I0610 11:19:04.214679   40730 command_runner.go:130] >     {
	I0610 11:19:04.214691   40730 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0610 11:19:04.214700   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.214708   40730 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0610 11:19:04.214716   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214723   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.214739   40730 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0610 11:19:04.214755   40730 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0610 11:19:04.214764   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214773   40730 command_runner.go:130] >       "size": "150779692",
	I0610 11:19:04.214782   40730 command_runner.go:130] >       "uid": {
	I0610 11:19:04.214789   40730 command_runner.go:130] >         "value": "0"
	I0610 11:19:04.214798   40730 command_runner.go:130] >       },
	I0610 11:19:04.214805   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.214815   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.214823   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.214832   40730 command_runner.go:130] >     },
	I0610 11:19:04.214839   40730 command_runner.go:130] >     {
	I0610 11:19:04.214854   40730 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0610 11:19:04.214863   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.214872   40730 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0610 11:19:04.214880   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214887   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.214903   40730 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0610 11:19:04.214918   40730 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0610 11:19:04.214927   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214936   40730 command_runner.go:130] >       "size": "117601759",
	I0610 11:19:04.214944   40730 command_runner.go:130] >       "uid": {
	I0610 11:19:04.214951   40730 command_runner.go:130] >         "value": "0"
	I0610 11:19:04.214959   40730 command_runner.go:130] >       },
	I0610 11:19:04.214967   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.214977   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.214985   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.214992   40730 command_runner.go:130] >     },
	I0610 11:19:04.215008   40730 command_runner.go:130] >     {
	I0610 11:19:04.215022   40730 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0610 11:19:04.215032   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.215043   40730 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0610 11:19:04.215053   40730 command_runner.go:130] >       ],
	I0610 11:19:04.215060   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.215088   40730 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0610 11:19:04.215104   40730 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0610 11:19:04.215110   40730 command_runner.go:130] >       ],
	I0610 11:19:04.215117   40730 command_runner.go:130] >       "size": "112170310",
	I0610 11:19:04.215124   40730 command_runner.go:130] >       "uid": {
	I0610 11:19:04.215134   40730 command_runner.go:130] >         "value": "0"
	I0610 11:19:04.215140   40730 command_runner.go:130] >       },
	I0610 11:19:04.215150   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.215155   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.215160   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.215165   40730 command_runner.go:130] >     },
	I0610 11:19:04.215169   40730 command_runner.go:130] >     {
	I0610 11:19:04.215177   40730 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0610 11:19:04.215183   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.215191   40730 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0610 11:19:04.215197   40730 command_runner.go:130] >       ],
	I0610 11:19:04.215204   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.215223   40730 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0610 11:19:04.215235   40730 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0610 11:19:04.215241   40730 command_runner.go:130] >       ],
	I0610 11:19:04.215247   40730 command_runner.go:130] >       "size": "85933465",
	I0610 11:19:04.215254   40730 command_runner.go:130] >       "uid": null,
	I0610 11:19:04.215261   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.215268   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.215275   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.215280   40730 command_runner.go:130] >     },
	I0610 11:19:04.215286   40730 command_runner.go:130] >     {
	I0610 11:19:04.215296   40730 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0610 11:19:04.215306   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.215315   40730 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0610 11:19:04.215323   40730 command_runner.go:130] >       ],
	I0610 11:19:04.215331   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.215347   40730 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0610 11:19:04.215363   40730 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0610 11:19:04.215372   40730 command_runner.go:130] >       ],
	I0610 11:19:04.215379   40730 command_runner.go:130] >       "size": "63026504",
	I0610 11:19:04.215388   40730 command_runner.go:130] >       "uid": {
	I0610 11:19:04.215398   40730 command_runner.go:130] >         "value": "0"
	I0610 11:19:04.215406   40730 command_runner.go:130] >       },
	I0610 11:19:04.215413   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.215422   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.215430   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.215438   40730 command_runner.go:130] >     },
	I0610 11:19:04.215445   40730 command_runner.go:130] >     {
	I0610 11:19:04.215459   40730 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0610 11:19:04.215469   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.215480   40730 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0610 11:19:04.215488   40730 command_runner.go:130] >       ],
	I0610 11:19:04.215496   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.215514   40730 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0610 11:19:04.215529   40730 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0610 11:19:04.215538   40730 command_runner.go:130] >       ],
	I0610 11:19:04.215546   40730 command_runner.go:130] >       "size": "750414",
	I0610 11:19:04.215554   40730 command_runner.go:130] >       "uid": {
	I0610 11:19:04.215562   40730 command_runner.go:130] >         "value": "65535"
	I0610 11:19:04.215570   40730 command_runner.go:130] >       },
	I0610 11:19:04.215577   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.215585   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.215595   40730 command_runner.go:130] >       "pinned": true
	I0610 11:19:04.215602   40730 command_runner.go:130] >     }
	I0610 11:19:04.215611   40730 command_runner.go:130] >   ]
	I0610 11:19:04.215619   40730 command_runner.go:130] > }
	I0610 11:19:04.215801   40730 crio.go:514] all images are preloaded for cri-o runtime.
	I0610 11:19:04.215814   40730 crio.go:433] Images already preloaded, skipping extraction
	I0610 11:19:04.215873   40730 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 11:19:04.249585   40730 command_runner.go:130] > {
	I0610 11:19:04.249608   40730 command_runner.go:130] >   "images": [
	I0610 11:19:04.249614   40730 command_runner.go:130] >     {
	I0610 11:19:04.249628   40730 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0610 11:19:04.249635   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.249648   40730 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0610 11:19:04.249653   40730 command_runner.go:130] >       ],
	I0610 11:19:04.249658   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.249669   40730 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0610 11:19:04.249679   40730 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0610 11:19:04.249687   40730 command_runner.go:130] >       ],
	I0610 11:19:04.249697   40730 command_runner.go:130] >       "size": "65291810",
	I0610 11:19:04.249705   40730 command_runner.go:130] >       "uid": null,
	I0610 11:19:04.249713   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.249724   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.249734   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.249741   40730 command_runner.go:130] >     },
	I0610 11:19:04.249747   40730 command_runner.go:130] >     {
	I0610 11:19:04.249758   40730 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0610 11:19:04.249767   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.249776   40730 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0610 11:19:04.249781   40730 command_runner.go:130] >       ],
	I0610 11:19:04.249788   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.249800   40730 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0610 11:19:04.249816   40730 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0610 11:19:04.249823   40730 command_runner.go:130] >       ],
	I0610 11:19:04.249830   40730 command_runner.go:130] >       "size": "65908273",
	I0610 11:19:04.249842   40730 command_runner.go:130] >       "uid": null,
	I0610 11:19:04.249852   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.249861   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.249868   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.249874   40730 command_runner.go:130] >     },
	I0610 11:19:04.249880   40730 command_runner.go:130] >     {
	I0610 11:19:04.249892   40730 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0610 11:19:04.249900   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.249909   40730 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0610 11:19:04.249916   40730 command_runner.go:130] >       ],
	I0610 11:19:04.249924   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.249939   40730 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0610 11:19:04.249955   40730 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0610 11:19:04.249963   40730 command_runner.go:130] >       ],
	I0610 11:19:04.249971   40730 command_runner.go:130] >       "size": "1363676",
	I0610 11:19:04.249981   40730 command_runner.go:130] >       "uid": null,
	I0610 11:19:04.249990   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.250020   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.250029   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.250035   40730 command_runner.go:130] >     },
	I0610 11:19:04.250040   40730 command_runner.go:130] >     {
	I0610 11:19:04.250051   40730 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0610 11:19:04.250061   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.250070   40730 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0610 11:19:04.250081   40730 command_runner.go:130] >       ],
	I0610 11:19:04.250089   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.250106   40730 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0610 11:19:04.250132   40730 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0610 11:19:04.250141   40730 command_runner.go:130] >       ],
	I0610 11:19:04.250149   40730 command_runner.go:130] >       "size": "31470524",
	I0610 11:19:04.250159   40730 command_runner.go:130] >       "uid": null,
	I0610 11:19:04.250169   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.250176   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.250186   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.250192   40730 command_runner.go:130] >     },
	I0610 11:19:04.250199   40730 command_runner.go:130] >     {
	I0610 11:19:04.250212   40730 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0610 11:19:04.250222   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.250232   40730 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0610 11:19:04.250240   40730 command_runner.go:130] >       ],
	I0610 11:19:04.250246   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.250260   40730 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0610 11:19:04.250276   40730 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0610 11:19:04.250284   40730 command_runner.go:130] >       ],
	I0610 11:19:04.250292   40730 command_runner.go:130] >       "size": "61245718",
	I0610 11:19:04.250301   40730 command_runner.go:130] >       "uid": null,
	I0610 11:19:04.250310   40730 command_runner.go:130] >       "username": "nonroot",
	I0610 11:19:04.250321   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.250331   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.250337   40730 command_runner.go:130] >     },
	I0610 11:19:04.250345   40730 command_runner.go:130] >     {
	I0610 11:19:04.250356   40730 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0610 11:19:04.250367   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.250379   40730 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0610 11:19:04.250388   40730 command_runner.go:130] >       ],
	I0610 11:19:04.250396   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.250411   40730 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0610 11:19:04.250426   40730 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0610 11:19:04.250434   40730 command_runner.go:130] >       ],
	I0610 11:19:04.250442   40730 command_runner.go:130] >       "size": "150779692",
	I0610 11:19:04.250453   40730 command_runner.go:130] >       "uid": {
	I0610 11:19:04.250463   40730 command_runner.go:130] >         "value": "0"
	I0610 11:19:04.250475   40730 command_runner.go:130] >       },
	I0610 11:19:04.250486   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.250496   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.250505   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.250513   40730 command_runner.go:130] >     },
	I0610 11:19:04.250519   40730 command_runner.go:130] >     {
	I0610 11:19:04.250530   40730 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0610 11:19:04.250539   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.250548   40730 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0610 11:19:04.250557   40730 command_runner.go:130] >       ],
	I0610 11:19:04.250567   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.250583   40730 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0610 11:19:04.250598   40730 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0610 11:19:04.250607   40730 command_runner.go:130] >       ],
	I0610 11:19:04.250615   40730 command_runner.go:130] >       "size": "117601759",
	I0610 11:19:04.250625   40730 command_runner.go:130] >       "uid": {
	I0610 11:19:04.250634   40730 command_runner.go:130] >         "value": "0"
	I0610 11:19:04.250641   40730 command_runner.go:130] >       },
	I0610 11:19:04.250651   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.250658   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.250668   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.250677   40730 command_runner.go:130] >     },
	I0610 11:19:04.250683   40730 command_runner.go:130] >     {
	I0610 11:19:04.250696   40730 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0610 11:19:04.250706   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.250717   40730 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0610 11:19:04.250727   40730 command_runner.go:130] >       ],
	I0610 11:19:04.250734   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.250755   40730 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0610 11:19:04.250771   40730 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0610 11:19:04.250781   40730 command_runner.go:130] >       ],
	I0610 11:19:04.250790   40730 command_runner.go:130] >       "size": "112170310",
	I0610 11:19:04.250800   40730 command_runner.go:130] >       "uid": {
	I0610 11:19:04.250808   40730 command_runner.go:130] >         "value": "0"
	I0610 11:19:04.250815   40730 command_runner.go:130] >       },
	I0610 11:19:04.250825   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.250833   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.250843   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.250849   40730 command_runner.go:130] >     },
	I0610 11:19:04.250857   40730 command_runner.go:130] >     {
	I0610 11:19:04.250868   40730 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0610 11:19:04.250878   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.250887   40730 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0610 11:19:04.250896   40730 command_runner.go:130] >       ],
	I0610 11:19:04.250903   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.250919   40730 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0610 11:19:04.250938   40730 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0610 11:19:04.250948   40730 command_runner.go:130] >       ],
	I0610 11:19:04.250955   40730 command_runner.go:130] >       "size": "85933465",
	I0610 11:19:04.250964   40730 command_runner.go:130] >       "uid": null,
	I0610 11:19:04.250972   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.250981   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.250988   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.251001   40730 command_runner.go:130] >     },
	I0610 11:19:04.251011   40730 command_runner.go:130] >     {
	I0610 11:19:04.251022   40730 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0610 11:19:04.251032   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.251042   40730 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0610 11:19:04.251051   40730 command_runner.go:130] >       ],
	I0610 11:19:04.251060   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.251075   40730 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0610 11:19:04.251088   40730 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0610 11:19:04.251098   40730 command_runner.go:130] >       ],
	I0610 11:19:04.251106   40730 command_runner.go:130] >       "size": "63026504",
	I0610 11:19:04.251115   40730 command_runner.go:130] >       "uid": {
	I0610 11:19:04.251123   40730 command_runner.go:130] >         "value": "0"
	I0610 11:19:04.251132   40730 command_runner.go:130] >       },
	I0610 11:19:04.251139   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.251148   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.251155   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.251162   40730 command_runner.go:130] >     },
	I0610 11:19:04.251172   40730 command_runner.go:130] >     {
	I0610 11:19:04.251182   40730 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0610 11:19:04.251192   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.251204   40730 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0610 11:19:04.251212   40730 command_runner.go:130] >       ],
	I0610 11:19:04.251220   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.251235   40730 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0610 11:19:04.251249   40730 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0610 11:19:04.251256   40730 command_runner.go:130] >       ],
	I0610 11:19:04.251267   40730 command_runner.go:130] >       "size": "750414",
	I0610 11:19:04.251274   40730 command_runner.go:130] >       "uid": {
	I0610 11:19:04.251282   40730 command_runner.go:130] >         "value": "65535"
	I0610 11:19:04.251291   40730 command_runner.go:130] >       },
	I0610 11:19:04.251298   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.251308   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.251318   40730 command_runner.go:130] >       "pinned": true
	I0610 11:19:04.251326   40730 command_runner.go:130] >     }
	I0610 11:19:04.251334   40730 command_runner.go:130] >   ]
	I0610 11:19:04.251341   40730 command_runner.go:130] > }
	I0610 11:19:04.251463   40730 crio.go:514] all images are preloaded for cri-o runtime.
	I0610 11:19:04.251475   40730 cache_images.go:84] Images are preloaded, skipping loading
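	(For reference: the JSON block above is the crictl-style image inventory that the preload check walks. Below is a minimal Go sketch of decoding that shape; the field names are taken from the log, while the top-level "images" key and reading from stdin are assumptions. This is an illustration, not minikube's own code.)

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// imageList mirrors the JSON dumped in the log above
	// (assumed to be wrapped in a top-level "images" array, as crictl prints it).
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			Username    string   `json:"username"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		var list imageList
		// Feed the JSON (e.g. copied out of the log) on stdin.
		if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		for _, img := range list.Images {
			if len(img.RepoTags) > 0 {
				fmt.Printf("%-55s pinned=%v size=%s\n", img.RepoTags[0], img.Pinned, img.Size)
			}
		}
	}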
	I0610 11:19:04.251484   40730 kubeadm.go:928] updating node { 192.168.39.100 8443 v1.30.1 crio true true} ...
	I0610 11:19:04.251595   40730 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-862380 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-862380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
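	(The [Unit]/[Service] snippet above is the kubelet systemd drop-in rendered for this node. As a rough illustration only, not minikube's actual template code, the same drop-in could be produced with text/template; the template text and values below are copied from the logged flags, and the map keys are made up for the sketch.)

	package main

	import (
		"os"
		"text/template"
	)

	// dropIn mirrors the kubelet unit override shown in the log above.
	const dropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(dropIn))
		// Values as logged for this node.
		_ = t.Execute(os.Stdout, map[string]string{
			"KubernetesVersion": "v1.30.1",
			"NodeName":          "multinode-862380",
			"NodeIP":            "192.168.39.100",
		})
	}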
	I0610 11:19:04.251675   40730 ssh_runner.go:195] Run: crio config
	I0610 11:19:04.284718   40730 command_runner.go:130] ! time="2024-06-10 11:19:04.267435288Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0610 11:19:04.290096   40730 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0610 11:19:04.296654   40730 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0610 11:19:04.296681   40730 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0610 11:19:04.296692   40730 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0610 11:19:04.296697   40730 command_runner.go:130] > #
	I0610 11:19:04.296707   40730 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0610 11:19:04.296721   40730 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0610 11:19:04.296728   40730 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0610 11:19:04.296738   40730 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0610 11:19:04.296745   40730 command_runner.go:130] > # reload'.
	I0610 11:19:04.296755   40730 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0610 11:19:04.296771   40730 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0610 11:19:04.296783   40730 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0610 11:19:04.296791   40730 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0610 11:19:04.296800   40730 command_runner.go:130] > [crio]
	I0610 11:19:04.296808   40730 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0610 11:19:04.296818   40730 command_runner.go:130] > # containers images, in this directory.
	I0610 11:19:04.296824   40730 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0610 11:19:04.296836   40730 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0610 11:19:04.296846   40730 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0610 11:19:04.296860   40730 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I0610 11:19:04.296870   40730 command_runner.go:130] > # imagestore = ""
	I0610 11:19:04.296883   40730 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0610 11:19:04.296896   40730 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0610 11:19:04.296905   40730 command_runner.go:130] > storage_driver = "overlay"
	I0610 11:19:04.296911   40730 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0610 11:19:04.296920   40730 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0610 11:19:04.296937   40730 command_runner.go:130] > storage_option = [
	I0610 11:19:04.296959   40730 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0610 11:19:04.296964   40730 command_runner.go:130] > ]
	I0610 11:19:04.296974   40730 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0610 11:19:04.296984   40730 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0610 11:19:04.296992   40730 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0610 11:19:04.297004   40730 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0610 11:19:04.297012   40730 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0610 11:19:04.297020   40730 command_runner.go:130] > # always happen on a node reboot
	I0610 11:19:04.297024   40730 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0610 11:19:04.297037   40730 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0610 11:19:04.297045   40730 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0610 11:19:04.297050   40730 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0610 11:19:04.297057   40730 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0610 11:19:04.297067   40730 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0610 11:19:04.297077   40730 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0610 11:19:04.297083   40730 command_runner.go:130] > # internal_wipe = true
	I0610 11:19:04.297091   40730 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0610 11:19:04.297099   40730 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0610 11:19:04.297106   40730 command_runner.go:130] > # internal_repair = false
	I0610 11:19:04.297113   40730 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0610 11:19:04.297121   40730 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0610 11:19:04.297129   40730 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0610 11:19:04.297136   40730 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0610 11:19:04.297144   40730 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0610 11:19:04.297150   40730 command_runner.go:130] > [crio.api]
	I0610 11:19:04.297156   40730 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0610 11:19:04.297160   40730 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0610 11:19:04.297168   40730 command_runner.go:130] > # IP address on which the stream server will listen.
	I0610 11:19:04.297172   40730 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0610 11:19:04.297181   40730 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0610 11:19:04.297188   40730 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0610 11:19:04.297192   40730 command_runner.go:130] > # stream_port = "0"
	I0610 11:19:04.297200   40730 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0610 11:19:04.297205   40730 command_runner.go:130] > # stream_enable_tls = false
	I0610 11:19:04.297210   40730 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0610 11:19:04.297217   40730 command_runner.go:130] > # stream_idle_timeout = ""
	I0610 11:19:04.297235   40730 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0610 11:19:04.297248   40730 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0610 11:19:04.297258   40730 command_runner.go:130] > # minutes.
	I0610 11:19:04.297267   40730 command_runner.go:130] > # stream_tls_cert = ""
	I0610 11:19:04.297280   40730 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0610 11:19:04.297294   40730 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0610 11:19:04.297303   40730 command_runner.go:130] > # stream_tls_key = ""
	I0610 11:19:04.297313   40730 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0610 11:19:04.297321   40730 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0610 11:19:04.297338   40730 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0610 11:19:04.297345   40730 command_runner.go:130] > # stream_tls_ca = ""
	I0610 11:19:04.297352   40730 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0610 11:19:04.297359   40730 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0610 11:19:04.297366   40730 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0610 11:19:04.297373   40730 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0610 11:19:04.297379   40730 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0610 11:19:04.297387   40730 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0610 11:19:04.297394   40730 command_runner.go:130] > [crio.runtime]
	I0610 11:19:04.297400   40730 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0610 11:19:04.297408   40730 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0610 11:19:04.297415   40730 command_runner.go:130] > # "nofile=1024:2048"
	I0610 11:19:04.297421   40730 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0610 11:19:04.297427   40730 command_runner.go:130] > # default_ulimits = [
	I0610 11:19:04.297431   40730 command_runner.go:130] > # ]
	I0610 11:19:04.297436   40730 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0610 11:19:04.297443   40730 command_runner.go:130] > # no_pivot = false
	I0610 11:19:04.297448   40730 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0610 11:19:04.297456   40730 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0610 11:19:04.297463   40730 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0610 11:19:04.297470   40730 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0610 11:19:04.297477   40730 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0610 11:19:04.297483   40730 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0610 11:19:04.297490   40730 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0610 11:19:04.297494   40730 command_runner.go:130] > # Cgroup setting for conmon
	I0610 11:19:04.297501   40730 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0610 11:19:04.297507   40730 command_runner.go:130] > conmon_cgroup = "pod"
	I0610 11:19:04.297514   40730 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0610 11:19:04.297523   40730 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0610 11:19:04.297540   40730 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0610 11:19:04.297547   40730 command_runner.go:130] > conmon_env = [
	I0610 11:19:04.297553   40730 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0610 11:19:04.297558   40730 command_runner.go:130] > ]
	I0610 11:19:04.297563   40730 command_runner.go:130] > # Additional environment variables to set for all the
	I0610 11:19:04.297570   40730 command_runner.go:130] > # containers. These are overridden if set in the
	I0610 11:19:04.297576   40730 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0610 11:19:04.297583   40730 command_runner.go:130] > # default_env = [
	I0610 11:19:04.297590   40730 command_runner.go:130] > # ]
	I0610 11:19:04.297595   40730 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0610 11:19:04.297602   40730 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0610 11:19:04.297608   40730 command_runner.go:130] > # selinux = false
	I0610 11:19:04.297615   40730 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0610 11:19:04.297624   40730 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0610 11:19:04.297632   40730 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0610 11:19:04.297639   40730 command_runner.go:130] > # seccomp_profile = ""
	I0610 11:19:04.297645   40730 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0610 11:19:04.297654   40730 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0610 11:19:04.297662   40730 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0610 11:19:04.297669   40730 command_runner.go:130] > # which might increase security.
	I0610 11:19:04.297674   40730 command_runner.go:130] > # This option is currently deprecated,
	I0610 11:19:04.297682   40730 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0610 11:19:04.297689   40730 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0610 11:19:04.297695   40730 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0610 11:19:04.297703   40730 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0610 11:19:04.297711   40730 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0610 11:19:04.297719   40730 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0610 11:19:04.297726   40730 command_runner.go:130] > # This option supports live configuration reload.
	I0610 11:19:04.297730   40730 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0610 11:19:04.297738   40730 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0610 11:19:04.297745   40730 command_runner.go:130] > # the cgroup blockio controller.
	I0610 11:19:04.297749   40730 command_runner.go:130] > # blockio_config_file = ""
	I0610 11:19:04.297757   40730 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0610 11:19:04.297761   40730 command_runner.go:130] > # blockio parameters.
	I0610 11:19:04.297767   40730 command_runner.go:130] > # blockio_reload = false
	I0610 11:19:04.297774   40730 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0610 11:19:04.297780   40730 command_runner.go:130] > # irqbalance daemon.
	I0610 11:19:04.297785   40730 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0610 11:19:04.297795   40730 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0610 11:19:04.297805   40730 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0610 11:19:04.297813   40730 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0610 11:19:04.297821   40730 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0610 11:19:04.297829   40730 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0610 11:19:04.297834   40730 command_runner.go:130] > # This option supports live configuration reload.
	I0610 11:19:04.297841   40730 command_runner.go:130] > # rdt_config_file = ""
	I0610 11:19:04.297846   40730 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0610 11:19:04.297853   40730 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0610 11:19:04.297868   40730 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0610 11:19:04.297875   40730 command_runner.go:130] > # separate_pull_cgroup = ""
	I0610 11:19:04.297881   40730 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0610 11:19:04.297889   40730 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0610 11:19:04.297895   40730 command_runner.go:130] > # will be added.
	I0610 11:19:04.297899   40730 command_runner.go:130] > # default_capabilities = [
	I0610 11:19:04.297906   40730 command_runner.go:130] > # 	"CHOWN",
	I0610 11:19:04.297910   40730 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0610 11:19:04.297916   40730 command_runner.go:130] > # 	"FSETID",
	I0610 11:19:04.297920   40730 command_runner.go:130] > # 	"FOWNER",
	I0610 11:19:04.297924   40730 command_runner.go:130] > # 	"SETGID",
	I0610 11:19:04.297930   40730 command_runner.go:130] > # 	"SETUID",
	I0610 11:19:04.297934   40730 command_runner.go:130] > # 	"SETPCAP",
	I0610 11:19:04.297940   40730 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0610 11:19:04.297943   40730 command_runner.go:130] > # 	"KILL",
	I0610 11:19:04.297949   40730 command_runner.go:130] > # ]
	I0610 11:19:04.297956   40730 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0610 11:19:04.297965   40730 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0610 11:19:04.297971   40730 command_runner.go:130] > # add_inheritable_capabilities = false
	I0610 11:19:04.297979   40730 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0610 11:19:04.297989   40730 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0610 11:19:04.297995   40730 command_runner.go:130] > default_sysctls = [
	I0610 11:19:04.298000   40730 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0610 11:19:04.298003   40730 command_runner.go:130] > ]
	I0610 11:19:04.298010   40730 command_runner.go:130] > # List of devices on the host that a
	I0610 11:19:04.298016   40730 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0610 11:19:04.298023   40730 command_runner.go:130] > # allowed_devices = [
	I0610 11:19:04.298026   40730 command_runner.go:130] > # 	"/dev/fuse",
	I0610 11:19:04.298032   40730 command_runner.go:130] > # ]
	I0610 11:19:04.298036   40730 command_runner.go:130] > # List of additional devices, specified as
	I0610 11:19:04.298046   40730 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0610 11:19:04.298053   40730 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0610 11:19:04.298063   40730 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0610 11:19:04.298069   40730 command_runner.go:130] > # additional_devices = [
	I0610 11:19:04.298072   40730 command_runner.go:130] > # ]
	I0610 11:19:04.298080   40730 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0610 11:19:04.298084   40730 command_runner.go:130] > # cdi_spec_dirs = [
	I0610 11:19:04.298090   40730 command_runner.go:130] > # 	"/etc/cdi",
	I0610 11:19:04.298094   40730 command_runner.go:130] > # 	"/var/run/cdi",
	I0610 11:19:04.298099   40730 command_runner.go:130] > # ]
	I0610 11:19:04.298105   40730 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0610 11:19:04.298114   40730 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0610 11:19:04.298121   40730 command_runner.go:130] > # Defaults to false.
	I0610 11:19:04.298126   40730 command_runner.go:130] > # device_ownership_from_security_context = false
	I0610 11:19:04.298134   40730 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0610 11:19:04.298142   40730 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0610 11:19:04.298149   40730 command_runner.go:130] > # hooks_dir = [
	I0610 11:19:04.298153   40730 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0610 11:19:04.298158   40730 command_runner.go:130] > # ]
	I0610 11:19:04.298164   40730 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0610 11:19:04.298172   40730 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0610 11:19:04.298177   40730 command_runner.go:130] > # its default mounts from the following two files:
	I0610 11:19:04.298182   40730 command_runner.go:130] > #
	I0610 11:19:04.298188   40730 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0610 11:19:04.298197   40730 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0610 11:19:04.298204   40730 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0610 11:19:04.298207   40730 command_runner.go:130] > #
	I0610 11:19:04.298213   40730 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0610 11:19:04.298221   40730 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0610 11:19:04.298238   40730 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0610 11:19:04.298249   40730 command_runner.go:130] > #      only add mounts it finds in this file.
	I0610 11:19:04.298258   40730 command_runner.go:130] > #
	I0610 11:19:04.298264   40730 command_runner.go:130] > # default_mounts_file = ""
	I0610 11:19:04.298276   40730 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0610 11:19:04.298289   40730 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0610 11:19:04.298299   40730 command_runner.go:130] > pids_limit = 1024
	I0610 11:19:04.298309   40730 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0610 11:19:04.298317   40730 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0610 11:19:04.298326   40730 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0610 11:19:04.298336   40730 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0610 11:19:04.298342   40730 command_runner.go:130] > # log_size_max = -1
	I0610 11:19:04.298349   40730 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0610 11:19:04.298358   40730 command_runner.go:130] > # log_to_journald = false
	I0610 11:19:04.298366   40730 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0610 11:19:04.298374   40730 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0610 11:19:04.298382   40730 command_runner.go:130] > # Path to directory for container attach sockets.
	I0610 11:19:04.298387   40730 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0610 11:19:04.298395   40730 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0610 11:19:04.298400   40730 command_runner.go:130] > # bind_mount_prefix = ""
	I0610 11:19:04.298408   40730 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0610 11:19:04.298414   40730 command_runner.go:130] > # read_only = false
	I0610 11:19:04.298420   40730 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0610 11:19:04.298428   40730 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0610 11:19:04.298432   40730 command_runner.go:130] > # live configuration reload.
	I0610 11:19:04.298439   40730 command_runner.go:130] > # log_level = "info"
	I0610 11:19:04.298444   40730 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0610 11:19:04.298451   40730 command_runner.go:130] > # This option supports live configuration reload.
	I0610 11:19:04.298455   40730 command_runner.go:130] > # log_filter = ""
	I0610 11:19:04.298464   40730 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0610 11:19:04.298474   40730 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0610 11:19:04.298480   40730 command_runner.go:130] > # separated by comma.
	I0610 11:19:04.298488   40730 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0610 11:19:04.298494   40730 command_runner.go:130] > # uid_mappings = ""
	I0610 11:19:04.298502   40730 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0610 11:19:04.298510   40730 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0610 11:19:04.298516   40730 command_runner.go:130] > # separated by comma.
	I0610 11:19:04.298524   40730 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0610 11:19:04.298530   40730 command_runner.go:130] > # gid_mappings = ""
	I0610 11:19:04.298536   40730 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0610 11:19:04.298544   40730 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0610 11:19:04.298552   40730 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0610 11:19:04.298563   40730 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0610 11:19:04.298569   40730 command_runner.go:130] > # minimum_mappable_uid = -1
	I0610 11:19:04.298576   40730 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0610 11:19:04.298584   40730 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0610 11:19:04.298592   40730 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0610 11:19:04.298599   40730 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0610 11:19:04.298608   40730 command_runner.go:130] > # minimum_mappable_gid = -1
	I0610 11:19:04.298616   40730 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0610 11:19:04.298624   40730 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0610 11:19:04.298631   40730 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0610 11:19:04.298637   40730 command_runner.go:130] > # ctr_stop_timeout = 30
	I0610 11:19:04.298643   40730 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0610 11:19:04.298651   40730 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0610 11:19:04.298659   40730 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0610 11:19:04.298664   40730 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0610 11:19:04.298671   40730 command_runner.go:130] > drop_infra_ctr = false
	I0610 11:19:04.298677   40730 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0610 11:19:04.298684   40730 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0610 11:19:04.298694   40730 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0610 11:19:04.298700   40730 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0610 11:19:04.298707   40730 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0610 11:19:04.298715   40730 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0610 11:19:04.298721   40730 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0610 11:19:04.298728   40730 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0610 11:19:04.298732   40730 command_runner.go:130] > # shared_cpuset = ""
	I0610 11:19:04.298741   40730 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0610 11:19:04.298747   40730 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0610 11:19:04.298751   40730 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0610 11:19:04.298760   40730 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0610 11:19:04.298767   40730 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0610 11:19:04.298772   40730 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0610 11:19:04.298780   40730 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0610 11:19:04.298785   40730 command_runner.go:130] > # enable_criu_support = false
	I0610 11:19:04.298790   40730 command_runner.go:130] > # Enable/disable the generation of the container,
	I0610 11:19:04.298798   40730 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0610 11:19:04.298804   40730 command_runner.go:130] > # enable_pod_events = false
	I0610 11:19:04.298811   40730 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0610 11:19:04.298826   40730 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0610 11:19:04.298830   40730 command_runner.go:130] > # default_runtime = "runc"
	I0610 11:19:04.298838   40730 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0610 11:19:04.298845   40730 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0610 11:19:04.298856   40730 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0610 11:19:04.298866   40730 command_runner.go:130] > # creation as a file is not desired either.
	I0610 11:19:04.298876   40730 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0610 11:19:04.298883   40730 command_runner.go:130] > # the hostname is being managed dynamically.
	I0610 11:19:04.298887   40730 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0610 11:19:04.298893   40730 command_runner.go:130] > # ]
	I0610 11:19:04.298899   40730 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0610 11:19:04.298917   40730 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0610 11:19:04.298925   40730 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0610 11:19:04.298932   40730 command_runner.go:130] > # Each entry in the table should follow the format:
	I0610 11:19:04.298936   40730 command_runner.go:130] > #
	I0610 11:19:04.298944   40730 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0610 11:19:04.298948   40730 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0610 11:19:04.298969   40730 command_runner.go:130] > # runtime_type = "oci"
	I0610 11:19:04.298976   40730 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0610 11:19:04.298985   40730 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0610 11:19:04.298989   40730 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0610 11:19:04.298994   40730 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0610 11:19:04.298998   40730 command_runner.go:130] > # monitor_env = []
	I0610 11:19:04.299003   40730 command_runner.go:130] > # privileged_without_host_devices = false
	I0610 11:19:04.299009   40730 command_runner.go:130] > # allowed_annotations = []
	I0610 11:19:04.299015   40730 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0610 11:19:04.299020   40730 command_runner.go:130] > # Where:
	I0610 11:19:04.299026   40730 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0610 11:19:04.299034   40730 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0610 11:19:04.299041   40730 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0610 11:19:04.299049   40730 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0610 11:19:04.299056   40730 command_runner.go:130] > #   in $PATH.
	I0610 11:19:04.299062   40730 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0610 11:19:04.299069   40730 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0610 11:19:04.299074   40730 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0610 11:19:04.299080   40730 command_runner.go:130] > #   state.
	I0610 11:19:04.299086   40730 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0610 11:19:04.299094   40730 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0610 11:19:04.299102   40730 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0610 11:19:04.299110   40730 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0610 11:19:04.299115   40730 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0610 11:19:04.299124   40730 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0610 11:19:04.299133   40730 command_runner.go:130] > #   The currently recognized values are:
	I0610 11:19:04.299141   40730 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0610 11:19:04.299151   40730 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0610 11:19:04.299159   40730 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0610 11:19:04.299165   40730 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0610 11:19:04.299175   40730 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0610 11:19:04.299184   40730 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0610 11:19:04.299193   40730 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0610 11:19:04.299201   40730 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0610 11:19:04.299209   40730 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0610 11:19:04.299218   40730 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0610 11:19:04.299229   40730 command_runner.go:130] > #   deprecated option "conmon".
	I0610 11:19:04.299243   40730 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0610 11:19:04.299254   40730 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0610 11:19:04.299268   40730 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0610 11:19:04.299279   40730 command_runner.go:130] > #   should be moved to the container's cgroup
	I0610 11:19:04.299292   40730 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0610 11:19:04.299302   40730 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0610 11:19:04.299311   40730 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0610 11:19:04.299318   40730 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0610 11:19:04.299322   40730 command_runner.go:130] > #
	I0610 11:19:04.299327   40730 command_runner.go:130] > # Using the seccomp notifier feature:
	I0610 11:19:04.299331   40730 command_runner.go:130] > #
	I0610 11:19:04.299337   40730 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0610 11:19:04.299346   40730 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0610 11:19:04.299351   40730 command_runner.go:130] > #
	I0610 11:19:04.299357   40730 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0610 11:19:04.299365   40730 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0610 11:19:04.299370   40730 command_runner.go:130] > #
	I0610 11:19:04.299376   40730 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0610 11:19:04.299380   40730 command_runner.go:130] > # feature.
	I0610 11:19:04.299384   40730 command_runner.go:130] > #
	I0610 11:19:04.299392   40730 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0610 11:19:04.299401   40730 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0610 11:19:04.299410   40730 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0610 11:19:04.299421   40730 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0610 11:19:04.299429   40730 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0610 11:19:04.299435   40730 command_runner.go:130] > #
	I0610 11:19:04.299441   40730 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0610 11:19:04.299450   40730 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0610 11:19:04.299456   40730 command_runner.go:130] > #
	I0610 11:19:04.299463   40730 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0610 11:19:04.299471   40730 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0610 11:19:04.299474   40730 command_runner.go:130] > #
	I0610 11:19:04.299482   40730 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0610 11:19:04.299491   40730 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0610 11:19:04.299497   40730 command_runner.go:130] > # limitation.
	I0610 11:19:04.299504   40730 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0610 11:19:04.299510   40730 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0610 11:19:04.299514   40730 command_runner.go:130] > runtime_type = "oci"
	I0610 11:19:04.299521   40730 command_runner.go:130] > runtime_root = "/run/runc"
	I0610 11:19:04.299526   40730 command_runner.go:130] > runtime_config_path = ""
	I0610 11:19:04.299533   40730 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0610 11:19:04.299537   40730 command_runner.go:130] > monitor_cgroup = "pod"
	I0610 11:19:04.299543   40730 command_runner.go:130] > monitor_exec_cgroup = ""
	I0610 11:19:04.299547   40730 command_runner.go:130] > monitor_env = [
	I0610 11:19:04.299554   40730 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0610 11:19:04.299561   40730 command_runner.go:130] > ]
	I0610 11:19:04.299565   40730 command_runner.go:130] > privileged_without_host_devices = false
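	(The [crio.runtime.runtimes.runc] table above is one concrete instance of the handler format documented in the comments. A small Go sketch of reading such entries back out of a crio.conf follows; it assumes the github.com/BurntSushi/toml package and the conventional /etc/crio/crio.conf path, and is illustrative only, not part of this test run.)

	package main

	import (
		"fmt"
		"log"

		"github.com/BurntSushi/toml"
	)

	// Only the fields needed for this example; crio.conf contains many more.
	type runtimeHandler struct {
		RuntimePath string `toml:"runtime_path"`
		RuntimeType string `toml:"runtime_type"`
		RuntimeRoot string `toml:"runtime_root"`
		MonitorPath string `toml:"monitor_path"`
	}

	type crioConfig struct {
		Crio struct {
			Runtime struct {
				Runtimes map[string]runtimeHandler `toml:"runtimes"`
			} `toml:"runtime"`
		} `toml:"crio"`
	}

	func main() {
		var cfg crioConfig
		// Path is an assumption; `crio config` (as run above) prints the effective config instead.
		if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
			log.Fatal(err)
		}
		for name, rt := range cfg.Crio.Runtime.Runtimes {
			fmt.Printf("%s: path=%s type=%s root=%s monitor=%s\n",
				name, rt.RuntimePath, rt.RuntimeType, rt.RuntimeRoot, rt.MonitorPath)
		}
	}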
	I0610 11:19:04.299574   40730 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0610 11:19:04.299581   40730 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0610 11:19:04.299588   40730 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0610 11:19:04.299597   40730 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0610 11:19:04.299607   40730 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0610 11:19:04.299615   40730 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0610 11:19:04.299626   40730 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0610 11:19:04.299636   40730 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0610 11:19:04.299641   40730 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0610 11:19:04.299647   40730 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0610 11:19:04.299650   40730 command_runner.go:130] > # Example:
	I0610 11:19:04.299655   40730 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0610 11:19:04.299659   40730 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0610 11:19:04.299666   40730 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0610 11:19:04.299671   40730 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0610 11:19:04.299674   40730 command_runner.go:130] > # cpuset = 0
	I0610 11:19:04.299678   40730 command_runner.go:130] > # cpushares = "0-1"
	I0610 11:19:04.299681   40730 command_runner.go:130] > # Where:
	I0610 11:19:04.299686   40730 command_runner.go:130] > # The workload name is workload-type.
	I0610 11:19:04.299692   40730 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0610 11:19:04.299697   40730 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0610 11:19:04.299702   40730 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0610 11:19:04.299709   40730 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0610 11:19:04.299715   40730 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0610 11:19:04.299719   40730 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0610 11:19:04.299725   40730 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0610 11:19:04.299729   40730 command_runner.go:130] > # Default value is set to true
	I0610 11:19:04.299733   40730 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0610 11:19:04.299738   40730 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0610 11:19:04.299742   40730 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0610 11:19:04.299746   40730 command_runner.go:130] > # Default value is set to 'false'
	I0610 11:19:04.299750   40730 command_runner.go:130] > # disable_hostport_mapping = false
	I0610 11:19:04.299756   40730 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0610 11:19:04.299758   40730 command_runner.go:130] > #
	I0610 11:19:04.299763   40730 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0610 11:19:04.299769   40730 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0610 11:19:04.299775   40730 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0610 11:19:04.299781   40730 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0610 11:19:04.299786   40730 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0610 11:19:04.299790   40730 command_runner.go:130] > [crio.image]
	I0610 11:19:04.299795   40730 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0610 11:19:04.299799   40730 command_runner.go:130] > # default_transport = "docker://"
	I0610 11:19:04.299808   40730 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0610 11:19:04.299814   40730 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0610 11:19:04.299820   40730 command_runner.go:130] > # global_auth_file = ""
	I0610 11:19:04.299825   40730 command_runner.go:130] > # The image used to instantiate infra containers.
	I0610 11:19:04.299833   40730 command_runner.go:130] > # This option supports live configuration reload.
	I0610 11:19:04.299837   40730 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0610 11:19:04.299846   40730 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0610 11:19:04.299854   40730 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0610 11:19:04.299860   40730 command_runner.go:130] > # This option supports live configuration reload.
	I0610 11:19:04.299869   40730 command_runner.go:130] > # pause_image_auth_file = ""
	I0610 11:19:04.299877   40730 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0610 11:19:04.299885   40730 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0610 11:19:04.299894   40730 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0610 11:19:04.299900   40730 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0610 11:19:04.299907   40730 command_runner.go:130] > # pause_command = "/pause"
	I0610 11:19:04.299913   40730 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0610 11:19:04.299921   40730 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0610 11:19:04.299929   40730 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0610 11:19:04.299938   40730 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0610 11:19:04.299946   40730 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0610 11:19:04.299952   40730 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0610 11:19:04.299958   40730 command_runner.go:130] > # pinned_images = [
	I0610 11:19:04.299961   40730 command_runner.go:130] > # ]
	I0610 11:19:04.299969   40730 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0610 11:19:04.299980   40730 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0610 11:19:04.299988   40730 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0610 11:19:04.299997   40730 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0610 11:19:04.300002   40730 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0610 11:19:04.300006   40730 command_runner.go:130] > # signature_policy = ""
	I0610 11:19:04.300012   40730 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0610 11:19:04.300020   40730 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0610 11:19:04.300027   40730 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0610 11:19:04.300035   40730 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0610 11:19:04.300043   40730 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0610 11:19:04.300049   40730 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0610 11:19:04.300055   40730 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0610 11:19:04.300064   40730 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0610 11:19:04.300070   40730 command_runner.go:130] > # changing them here.
	I0610 11:19:04.300074   40730 command_runner.go:130] > # insecure_registries = [
	I0610 11:19:04.300078   40730 command_runner.go:130] > # ]
	I0610 11:19:04.300084   40730 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0610 11:19:04.300091   40730 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0610 11:19:04.300095   40730 command_runner.go:130] > # image_volumes = "mkdir"
	I0610 11:19:04.300103   40730 command_runner.go:130] > # Temporary directory to use for storing big files
	I0610 11:19:04.300110   40730 command_runner.go:130] > # big_files_temporary_dir = ""
	I0610 11:19:04.300119   40730 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0610 11:19:04.300125   40730 command_runner.go:130] > # CNI plugins.
	I0610 11:19:04.300128   40730 command_runner.go:130] > [crio.network]
	I0610 11:19:04.300137   40730 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0610 11:19:04.300145   40730 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0610 11:19:04.300149   40730 command_runner.go:130] > # cni_default_network = ""
	I0610 11:19:04.300157   40730 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0610 11:19:04.300162   40730 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0610 11:19:04.300169   40730 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0610 11:19:04.300175   40730 command_runner.go:130] > # plugin_dirs = [
	I0610 11:19:04.300178   40730 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0610 11:19:04.300182   40730 command_runner.go:130] > # ]
	I0610 11:19:04.300188   40730 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0610 11:19:04.300194   40730 command_runner.go:130] > [crio.metrics]
	I0610 11:19:04.300199   40730 command_runner.go:130] > # Globally enable or disable metrics support.
	I0610 11:19:04.300205   40730 command_runner.go:130] > enable_metrics = true
	I0610 11:19:04.300210   40730 command_runner.go:130] > # Specify enabled metrics collectors.
	I0610 11:19:04.300217   40730 command_runner.go:130] > # Per default all metrics are enabled.
	I0610 11:19:04.300223   40730 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0610 11:19:04.300237   40730 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0610 11:19:04.300250   40730 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0610 11:19:04.300259   40730 command_runner.go:130] > # metrics_collectors = [
	I0610 11:19:04.300268   40730 command_runner.go:130] > # 	"operations",
	I0610 11:19:04.300279   40730 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0610 11:19:04.300290   40730 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0610 11:19:04.300300   40730 command_runner.go:130] > # 	"operations_errors",
	I0610 11:19:04.300309   40730 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0610 11:19:04.300318   40730 command_runner.go:130] > # 	"image_pulls_by_name",
	I0610 11:19:04.300327   40730 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0610 11:19:04.300337   40730 command_runner.go:130] > # 	"image_pulls_failures",
	I0610 11:19:04.300347   40730 command_runner.go:130] > # 	"image_pulls_successes",
	I0610 11:19:04.300357   40730 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0610 11:19:04.300365   40730 command_runner.go:130] > # 	"image_layer_reuse",
	I0610 11:19:04.300372   40730 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0610 11:19:04.300377   40730 command_runner.go:130] > # 	"containers_oom_total",
	I0610 11:19:04.300383   40730 command_runner.go:130] > # 	"containers_oom",
	I0610 11:19:04.300387   40730 command_runner.go:130] > # 	"processes_defunct",
	I0610 11:19:04.300393   40730 command_runner.go:130] > # 	"operations_total",
	I0610 11:19:04.300398   40730 command_runner.go:130] > # 	"operations_latency_seconds",
	I0610 11:19:04.300404   40730 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0610 11:19:04.300411   40730 command_runner.go:130] > # 	"operations_errors_total",
	I0610 11:19:04.300415   40730 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0610 11:19:04.300422   40730 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0610 11:19:04.300426   40730 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0610 11:19:04.300431   40730 command_runner.go:130] > # 	"image_pulls_success_total",
	I0610 11:19:04.300441   40730 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0610 11:19:04.300447   40730 command_runner.go:130] > # 	"containers_oom_count_total",
	I0610 11:19:04.300453   40730 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0610 11:19:04.300459   40730 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0610 11:19:04.300463   40730 command_runner.go:130] > # ]
	I0610 11:19:04.300468   40730 command_runner.go:130] > # The port on which the metrics server will listen.
	I0610 11:19:04.300474   40730 command_runner.go:130] > # metrics_port = 9090
	I0610 11:19:04.300480   40730 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0610 11:19:04.300487   40730 command_runner.go:130] > # metrics_socket = ""
	I0610 11:19:04.300492   40730 command_runner.go:130] > # The certificate for the secure metrics server.
	I0610 11:19:04.300500   40730 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0610 11:19:04.300509   40730 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0610 11:19:04.300514   40730 command_runner.go:130] > # certificate on any modification event.
	I0610 11:19:04.300519   40730 command_runner.go:130] > # metrics_cert = ""
	I0610 11:19:04.300524   40730 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0610 11:19:04.300531   40730 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0610 11:19:04.300535   40730 command_runner.go:130] > # metrics_key = ""
	I0610 11:19:04.300604   40730 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0610 11:19:04.300608   40730 command_runner.go:130] > [crio.tracing]
	I0610 11:19:04.300613   40730 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0610 11:19:04.300617   40730 command_runner.go:130] > # enable_tracing = false
	I0610 11:19:04.300622   40730 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0610 11:19:04.300629   40730 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0610 11:19:04.300637   40730 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0610 11:19:04.300644   40730 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0610 11:19:04.300648   40730 command_runner.go:130] > # CRI-O NRI configuration.
	I0610 11:19:04.300654   40730 command_runner.go:130] > [crio.nri]
	I0610 11:19:04.300658   40730 command_runner.go:130] > # Globally enable or disable NRI.
	I0610 11:19:04.300664   40730 command_runner.go:130] > # enable_nri = false
	I0610 11:19:04.300668   40730 command_runner.go:130] > # NRI socket to listen on.
	I0610 11:19:04.300677   40730 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0610 11:19:04.300684   40730 command_runner.go:130] > # NRI plugin directory to use.
	I0610 11:19:04.300689   40730 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0610 11:19:04.300697   40730 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0610 11:19:04.300705   40730 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0610 11:19:04.300710   40730 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0610 11:19:04.300717   40730 command_runner.go:130] > # nri_disable_connections = false
	I0610 11:19:04.300722   40730 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0610 11:19:04.300729   40730 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0610 11:19:04.300734   40730 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0610 11:19:04.300741   40730 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0610 11:19:04.300747   40730 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0610 11:19:04.300753   40730 command_runner.go:130] > [crio.stats]
	I0610 11:19:04.300761   40730 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0610 11:19:04.300769   40730 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0610 11:19:04.300774   40730 command_runner.go:130] > # stats_collection_period = 0
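The rendered crio.conf above leaves nearly every [crio.metrics] key at its commented default, with only enable_metrics = true set explicitly. Below is a minimal Go sketch of how such a dump could be checked programmatically; it is stdlib only, and the /etc/crio/crio.conf path plus the settingsInTable helper are assumptions for illustration, not part of this test run:

	// crioconf_check.go - illustrative sketch; path and table name are assumptions.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// settingsInTable returns the uncommented "key = value" lines found under the
	// named TOML table (e.g. "[crio.metrics]") in a crio.conf-style file.
	// Multi-line array values are reported element by element; good enough for a sketch.
	func settingsInTable(path, table string) ([]string, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()

		var inTable bool
		var set []string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "["): // a new table header starts
				inTable = line == table
			case inTable && line != "" && !strings.HasPrefix(line, "#"):
				set = append(set, line) // explicitly configured, not a commented default
			}
		}
		return set, sc.Err()
	}

	func main() {
		set, err := settingsInTable("/etc/crio/crio.conf", "[crio.metrics]")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("explicit [crio.metrics] settings:", set) // e.g. [enable_metrics = true]
	}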
	I0610 11:19:04.300874   40730 cni.go:84] Creating CNI manager for ""
	I0610 11:19:04.300882   40730 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0610 11:19:04.300890   40730 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 11:19:04.300910   40730 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-862380 NodeName:multinode-862380 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 11:19:04.301050   40730 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-862380"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
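The generated KubeletConfiguration above pins cgroupDriver to cgroupfs and the CRI socket to unix:///var/run/crio/crio.sock, matching the CRI-O side. A small sketch, assuming gopkg.in/yaml.v3 and a deliberately minimal struct, of verifying those two fields in such a fragment:

	// kubeletcfg_check.go - illustrative only; the struct covers just the two
	// fields discussed above and gopkg.in/yaml.v3 is an assumed dependency.
	package main

	import (
		"fmt"

		"gopkg.in/yaml.v3"
	)

	type kubeletConfig struct {
		CgroupDriver             string `yaml:"cgroupDriver"`
		ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	}

	func main() {
		doc := []byte(`
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	`)
		var cfg kubeletConfig
		if err := yaml.Unmarshal(doc, &cfg); err != nil {
			panic(err)
		}
		// Both values must agree with the CRI-O side: cgroupfs driver, crio.sock endpoint.
		fmt.Println(cfg.CgroupDriver == "cgroupfs" &&
			cfg.ContainerRuntimeEndpoint == "unix:///var/run/crio/crio.sock")
	}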
	
	I0610 11:19:04.301114   40730 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 11:19:04.311624   40730 command_runner.go:130] > kubeadm
	I0610 11:19:04.311646   40730 command_runner.go:130] > kubectl
	I0610 11:19:04.311650   40730 command_runner.go:130] > kubelet
	I0610 11:19:04.311670   40730 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 11:19:04.311716   40730 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 11:19:04.320820   40730 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0610 11:19:04.336935   40730 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 11:19:04.352581   40730 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0610 11:19:04.368500   40730 ssh_runner.go:195] Run: grep 192.168.39.100	control-plane.minikube.internal$ /etc/hosts
	I0610 11:19:04.372243   40730 command_runner.go:130] > 192.168.39.100	control-plane.minikube.internal
	I0610 11:19:04.372319   40730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:19:04.514176   40730 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 11:19:04.528402   40730 certs.go:68] Setting up /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/multinode-862380 for IP: 192.168.39.100
	I0610 11:19:04.528426   40730 certs.go:194] generating shared ca certs ...
	I0610 11:19:04.528446   40730 certs.go:226] acquiring lock for ca certs: {Name:mke8b68fecbd8b649419d142cdde25446085a9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:19:04.528641   40730 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key
	I0610 11:19:04.528684   40730 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key
	I0610 11:19:04.528694   40730 certs.go:256] generating profile certs ...
	I0610 11:19:04.528831   40730 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/multinode-862380/client.key
	I0610 11:19:04.528912   40730 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/multinode-862380/apiserver.key.a2475a71
	I0610 11:19:04.529014   40730 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/multinode-862380/proxy-client.key
	I0610 11:19:04.529029   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 11:19:04.529052   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 11:19:04.529071   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 11:19:04.529088   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 11:19:04.529104   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/multinode-862380/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 11:19:04.529122   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/multinode-862380/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 11:19:04.529138   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/multinode-862380/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 11:19:04.529156   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/multinode-862380/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 11:19:04.529232   40730 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem (1338 bytes)
	W0610 11:19:04.529273   40730 certs.go:480] ignoring /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758_empty.pem, impossibly tiny 0 bytes
	I0610 11:19:04.529286   40730 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 11:19:04.529315   40730 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem (1082 bytes)
	I0610 11:19:04.529346   40730 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem (1123 bytes)
	I0610 11:19:04.529380   40730 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem (1675 bytes)
	I0610 11:19:04.529430   40730 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem (1708 bytes)
	I0610 11:19:04.529467   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:19:04.529487   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem -> /usr/share/ca-certificates/10758.pem
	I0610 11:19:04.529504   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /usr/share/ca-certificates/107582.pem
	I0610 11:19:04.530151   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 11:19:04.554473   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 11:19:04.577097   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 11:19:04.599483   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 11:19:04.622193   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/multinode-862380/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0610 11:19:04.644912   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/multinode-862380/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 11:19:04.668405   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/multinode-862380/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 11:19:04.691454   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/multinode-862380/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 11:19:04.714875   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 11:19:04.738286   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem --> /usr/share/ca-certificates/10758.pem (1338 bytes)
	I0610 11:19:04.762284   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /usr/share/ca-certificates/107582.pem (1708 bytes)
	I0610 11:19:04.785548   40730 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 11:19:04.801895   40730 ssh_runner.go:195] Run: openssl version
	I0610 11:19:04.807696   40730 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0610 11:19:04.807844   40730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 11:19:04.819442   40730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:19:04.823982   40730 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 10 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:19:04.824009   40730 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:19:04.824055   40730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:19:04.829638   40730 command_runner.go:130] > b5213941
	I0610 11:19:04.829736   40730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 11:19:04.838894   40730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10758.pem && ln -fs /usr/share/ca-certificates/10758.pem /etc/ssl/certs/10758.pem"
	I0610 11:19:04.849775   40730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10758.pem
	I0610 11:19:04.854638   40730 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 10 10:34 /usr/share/ca-certificates/10758.pem
	I0610 11:19:04.854691   40730 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:34 /usr/share/ca-certificates/10758.pem
	I0610 11:19:04.854739   40730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10758.pem
	I0610 11:19:04.860242   40730 command_runner.go:130] > 51391683
	I0610 11:19:04.860303   40730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10758.pem /etc/ssl/certs/51391683.0"
	I0610 11:19:04.869487   40730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107582.pem && ln -fs /usr/share/ca-certificates/107582.pem /etc/ssl/certs/107582.pem"
	I0610 11:19:04.879854   40730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107582.pem
	I0610 11:19:04.884054   40730 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 10 10:34 /usr/share/ca-certificates/107582.pem
	I0610 11:19:04.884087   40730 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:34 /usr/share/ca-certificates/107582.pem
	I0610 11:19:04.884135   40730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107582.pem
	I0610 11:19:04.889287   40730 command_runner.go:130] > 3ec20f2e
	I0610 11:19:04.889411   40730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107582.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 11:19:04.898381   40730 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 11:19:04.902534   40730 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 11:19:04.902553   40730 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0610 11:19:04.902559   40730 command_runner.go:130] > Device: 253,1	Inode: 7339542     Links: 1
	I0610 11:19:04.902565   40730 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 11:19:04.902571   40730 command_runner.go:130] > Access: 2024-06-10 11:12:55.236834982 +0000
	I0610 11:19:04.902577   40730 command_runner.go:130] > Modify: 2024-06-10 11:12:55.236834982 +0000
	I0610 11:19:04.902582   40730 command_runner.go:130] > Change: 2024-06-10 11:12:55.236834982 +0000
	I0610 11:19:04.902593   40730 command_runner.go:130] >  Birth: 2024-06-10 11:12:55.236834982 +0000
	I0610 11:19:04.902697   40730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0610 11:19:04.908012   40730 command_runner.go:130] > Certificate will not expire
	I0610 11:19:04.908080   40730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0610 11:19:04.919510   40730 command_runner.go:130] > Certificate will not expire
	I0610 11:19:04.919571   40730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0610 11:19:04.932368   40730 command_runner.go:130] > Certificate will not expire
	I0610 11:19:04.932738   40730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0610 11:19:04.940387   40730 command_runner.go:130] > Certificate will not expire
	I0610 11:19:04.940452   40730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0610 11:19:04.945839   40730 command_runner.go:130] > Certificate will not expire
	I0610 11:19:04.945906   40730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0610 11:19:04.951354   40730 command_runner.go:130] > Certificate will not expire
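Each "openssl x509 -noout -checkend 86400" call above asks whether the given certificate expires within the next 24 hours. A stdlib-only Go sketch of the equivalent check; the certificate path mirrors one of the files probed above and the expiresWithin helper is made up for illustration:

	// certexpiry_check.go - stdlib-only sketch of the same 24h expiry check.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file expires
	// before now + d (the equivalent of openssl's -checkend).
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return cert.NotAfter.Before(time.Now().Add(d)), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("certificate will expire within 24h")
		} else {
			fmt.Println("certificate will not expire") // matches the openssl output above
		}
	}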
	I0610 11:19:04.951493   40730 kubeadm.go:391] StartCluster: {Name:multinode-862380 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:multinode-862380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.68 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:19:04.951640   40730 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0610 11:19:04.951709   40730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 11:19:04.993813   40730 command_runner.go:130] > e0cb3861c89e33df4af9682d4ecbad3f6bbc0a9150d26e80be390d8550cd3e90
	I0610 11:19:04.993846   40730 command_runner.go:130] > b0bc49dc154cf467f6f2dd93ab0e78907f6d0f8592e164371108706cc509e00f
	I0610 11:19:04.993856   40730 command_runner.go:130] > f2791a953a9200b3f61b8829c703b259f1483f87c5e99ce9cfaa18109775e0fc
	I0610 11:19:04.993866   40730 command_runner.go:130] > d7dcbfcd0f6f950677096624f71b7ec58dbe647a45bfe1896dd52dd14753a55c
	I0610 11:19:04.993876   40730 command_runner.go:130] > 9c465791f6493e7b755a5672c14ce27cf99149ae704df0b5b7ba7589cbdccd3f
	I0610 11:19:04.993885   40730 command_runner.go:130] > e7b3e1262dc380437d24a63b8d3b43827f62b39b385c799ae1a3c75195a3b976
	I0610 11:19:04.993894   40730 command_runner.go:130] > 58557fa016e58b7c0cbd020c0c94ce71b80658955335b632f9b63f06aaec7266
	I0610 11:19:04.993904   40730 command_runner.go:130] > 4f84f021658bb7edbb72828c3cdce1348895737f86d83744cb73982fa6cdc4cb
	I0610 11:19:04.993935   40730 cri.go:89] found id: "e0cb3861c89e33df4af9682d4ecbad3f6bbc0a9150d26e80be390d8550cd3e90"
	I0610 11:19:04.993947   40730 cri.go:89] found id: "b0bc49dc154cf467f6f2dd93ab0e78907f6d0f8592e164371108706cc509e00f"
	I0610 11:19:04.993953   40730 cri.go:89] found id: "f2791a953a9200b3f61b8829c703b259f1483f87c5e99ce9cfaa18109775e0fc"
	I0610 11:19:04.993960   40730 cri.go:89] found id: "d7dcbfcd0f6f950677096624f71b7ec58dbe647a45bfe1896dd52dd14753a55c"
	I0610 11:19:04.993965   40730 cri.go:89] found id: "9c465791f6493e7b755a5672c14ce27cf99149ae704df0b5b7ba7589cbdccd3f"
	I0610 11:19:04.993972   40730 cri.go:89] found id: "e7b3e1262dc380437d24a63b8d3b43827f62b39b385c799ae1a3c75195a3b976"
	I0610 11:19:04.993976   40730 cri.go:89] found id: "58557fa016e58b7c0cbd020c0c94ce71b80658955335b632f9b63f06aaec7266"
	I0610 11:19:04.993981   40730 cri.go:89] found id: "4f84f021658bb7edbb72828c3cdce1348895737f86d83744cb73982fa6cdc4cb"
	I0610 11:19:04.993986   40730 cri.go:89] found id: ""
	I0610 11:19:04.994036   40730 ssh_runner.go:195] Run: sudo runc list -f json
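The container IDs listed above come from "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system" run over SSH. A sketch of the same listing step, shelling out to crictl locally; it assumes root access to a running CRI-O, and the kubeSystemContainerIDs name is made up for illustration:

	// crictl_list.go - illustrative only; requires root and a running CRI-O.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// kubeSystemContainerIDs returns the IDs of all kube-system containers
	// (running or exited), mirroring the crictl invocation in the log above.
	func kubeSystemContainerIDs() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil // one 64-hex container ID per line
	}

	func main() {
		ids, err := kubeSystemContainerIDs()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Printf("found %d kube-system containers\n", len(ids))
	}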
	
	
	==> CRI-O <==
	Jun 10 11:20:29 multinode-862380 crio[2864]: time="2024-06-10 11:20:29.000413906Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f1f7f63-da1f-4660-8a31-5b89d684beec name=/runtime.v1.RuntimeService/Version
	Jun 10 11:20:29 multinode-862380 crio[2864]: time="2024-06-10 11:20:29.001203799Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7307979d-b205-46ce-a5cc-c53f8f0df228 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:20:29 multinode-862380 crio[2864]: time="2024-06-10 11:20:29.001681743Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718018429001654668,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7307979d-b205-46ce-a5cc-c53f8f0df228 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:20:29 multinode-862380 crio[2864]: time="2024-06-10 11:20:29.002114658Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df1f2469-ba30-4dec-8e50-3f1d4b560051 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:20:29 multinode-862380 crio[2864]: time="2024-06-10 11:20:29.002166446Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df1f2469-ba30-4dec-8e50-3f1d4b560051 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:20:29 multinode-862380 crio[2864]: time="2024-06-10 11:20:29.002523388Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2fc79e4de71d445e66c76ebc879593d2599c2c77229107f2a96a78737d49d6e,PodSandboxId:1daffe5524d188139839a6b1b96ad5ca5edfb98a6eff8bb442212a5c47d51c59,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718018385981027011,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jx8f9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 237e1205-8c4b-4234-ad0f-80e35f097827,},Annotations:map[string]string{io.kubernetes.container.hash: 6b71ca20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d239e403b99cbc595846d609ca3877c0378cd522cc51a4ef8e62481693d5022,PodSandboxId:fe929d942cc9e63e145c553e0aa9f5268b3af05b033b39c69c2f4bf196375602,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718018352558458161,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vfxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f70aa4-9ef6-4257-86b3-4fd0968b2e37,},Annotations:map[string]string{io.kubernetes.container.hash: 3bb49cae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d8e67a6d2840c27a7e9918a80a0a0c785dc7b6d2bd90a358d542bc6a1aabe74,PodSandboxId:abfa9aa50974623da5a50a69184494c217cf08dbc6007db84d76e812590ddb52,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718018352479657098,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bnpjz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d6d1e96-ea64-4ea0-855a-0e8fe
db5164d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f1d502d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55fbec1ed1f5c35125219a44fd079a722d49d9d8cbdb2455f8a70f01da71ed4e,PodSandboxId:4517c9efbd8541d8d1d37f445a576a5f35bb0182780f23bc213b682f1e16ae21,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718018352360363094,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gghfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6793da8-f52b-488b-a0ec-88cbf6460c13,},Annotations:map[string]
string{io.kubernetes.container.hash: ab55db52,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0a3bf0e596a0cca6f9831fcb9b458d5e853147197c42b8d6060f07e94f173f5,PodSandboxId:cdcc0f30f293274460a437197df073c4e406ed920aab513665fb6c4a8b4d8b15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718018352319151234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7966a309-dca2-488e-b683-0ff37fa01fe3,},Annotations:map[string]string{io.ku
bernetes.container.hash: 886eec8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e188af5e7614aace4ffe7147aadf26b4ae34f2212f99727a96e4a432272564dc,PodSandboxId:87bf102e2b2943355dabc72d3e0980da5c49276950d1ad4b2fc9c2f1f768e8e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718018347445798005,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134cbc49aee8e613a34fe93b9347c702,},Annotations:map[string]string{io.kubernetes.container.hash: 9e626184,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43dceb31898cc620cff1b69f4b915cc293db2955ad4fdfa09aaf24f4ba57bde1,PodSandboxId:dcd8d5c9c8cc1d7f6550cc6d27b429fa8028411f6868b679a6883186ce6898e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718018347411124949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4531b47a5c5353a3b6d9c833bc5c53,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5
e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50310344784fc7f085c0a0d226fde85f9b838c4bcfeaafbde1cf90adf4432aee,PodSandboxId:1a238893e319e44879cd357493747cefc3bd8860f007d2383c98f0d686678db0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718018347413342565,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5215e23358f00a13bf40785087f55d,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5971ca1a108b34acbf6ae63f70db7b15d696e6cd577d1f3356a2b6661bb028d8,PodSandboxId:0b2ba625d3d8f5417652f5e20ac755f7fd3a72975d10e8ac6dd75ff553730dae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718018347339762997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 403c273aa070281af0f1949448b47864,},Annotations:map[string]string{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cc5324b6db46ad8a78594835c98c73f0f42d1c87636abde9b15fb4cbd4d2151,PodSandboxId:cfbc0a4db39045ee382b6a54d8d5f5da4410877bfde75f2ee86af08cede879e0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718018047623152578,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jx8f9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 237e1205-8c4b-4234-ad0f-80e35f097827,},Annotations:map[string]string{io.kubernetes.container.hash: 6b71ca20,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cb3861c89e33df4af9682d4ecbad3f6bbc0a9150d26e80be390d8550cd3e90,PodSandboxId:024549fd085df2c3f26e3b57056e36220f606174179776d0ec5517d7ab213ed2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718018002906701577,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vfxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f70aa4-9ef6-4257-86b3-4fd0968b2e37,},Annotations:map[string]string{io.kubernetes.container.hash: 3bb49cae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0bc49dc154cf467f6f2dd93ab0e78907f6d0f8592e164371108706cc509e00f,PodSandboxId:41beb7220db38d30d9a9e09ec9c7a266465505827ab8beb5023e3e210a3baa7b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718018002842630237,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 7966a309-dca2-488e-b683-0ff37fa01fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 886eec8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2791a953a9200b3f61b8829c703b259f1483f87c5e99ce9cfaa18109775e0fc,PodSandboxId:47791e1db12ccb5a3125bf15245a19e55a3ce586fd87ad323ea1f816731386b1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718018001431173205,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bnpjz,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 6d6d1e96-ea64-4ea0-855a-0e8fedb5164d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f1d502d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7dcbfcd0f6f950677096624f71b7ec58dbe647a45bfe1896dd52dd14753a55c,PodSandboxId:a2c6585397cfe84addb16de8bb37037463d7253e6320d81daa859502341f8f85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718017997985587185,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gghfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d6793da8-f52b-488b-a0ec-88cbf6460c13,},Annotations:map[string]string{io.kubernetes.container.hash: ab55db52,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7b3e1262dc380437d24a63b8d3b43827f62b39b385c799ae1a3c75195a3b976,PodSandboxId:c88f109c2c83a6337b70493edeaa6bdda09624f9dbef45778d2ef091c19aeac1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718017978705276425,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134cbc49aee8e613a34fe93b9347c702
,},Annotations:map[string]string{io.kubernetes.container.hash: 9e626184,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c465791f6493e7b755a5672c14ce27cf99149ae704df0b5b7ba7589cbdccd3f,PodSandboxId:dc44bfa9ee46200e44408345aa810713cfebf553e56e6a32f65ec6bd305edeb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718017978724535495,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5215e23358f00a13bf40785087f55d,},Annotations:
map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58557fa016e58b7c0cbd020c0c94ce71b80658955335b632f9b63f06aaec7266,PodSandboxId:10c8e06b75105c6690ee540a76a09dcc7cc12fcbdf5b36d4eb25ead4778cc4c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718017978654023906,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 403c273aa070281af0f1949448b47864,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f84f021658bb7edbb72828c3cdce1348895737f86d83744cb73982fa6cdc4cb,PodSandboxId:04f27b50f52704344dd889054f4cf6da33cebd323a5db935ef89eb4abe78ffe8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718017978635289553,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4531b47a5c5353a3b6d9c833bc5c53,},Annotations:map
[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df1f2469-ba30-4dec-8e50-3f1d4b560051 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:20:29 multinode-862380 crio[2864]: time="2024-06-10 11:20:29.002980900Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=dab242ba-b5d2-4c94-bbf4-3ac8bac1a9cc name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 10 11:20:29 multinode-862380 crio[2864]: time="2024-06-10 11:20:29.003298451Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1daffe5524d188139839a6b1b96ad5ca5edfb98a6eff8bb442212a5c47d51c59,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-jx8f9,Uid:237e1205-8c4b-4234-ad0f-80e35f097827,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718018385854834505,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-jx8f9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 237e1205-8c4b-4234-ad0f-80e35f097827,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T11:19:11.726779473Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fe929d942cc9e63e145c553e0aa9f5268b3af05b033b39c69c2f4bf196375602,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-vfxw9,Uid:56f70aa4-9ef6-4257-86b3-4fd0968b2e37,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1718018352136052520,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-vfxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f70aa4-9ef6-4257-86b3-4fd0968b2e37,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T11:19:11.726791174Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4517c9efbd8541d8d1d37f445a576a5f35bb0182780f23bc213b682f1e16ae21,Metadata:&PodSandboxMetadata{Name:kube-proxy-gghfj,Uid:d6793da8-f52b-488b-a0ec-88cbf6460c13,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718018352110718304,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gghfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6793da8-f52b-488b-a0ec-88cbf6460c13,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{
kubernetes.io/config.seen: 2024-06-10T11:19:11.726796187Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:abfa9aa50974623da5a50a69184494c217cf08dbc6007db84d76e812590ddb52,Metadata:&PodSandboxMetadata{Name:kindnet-bnpjz,Uid:6d6d1e96-ea64-4ea0-855a-0e8fedb5164d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718018352080293053,Labels:map[string]string{app: kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-bnpjz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d6d1e96-ea64-4ea0-855a-0e8fedb5164d,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T11:19:11.726785462Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cdcc0f30f293274460a437197df073c4e406ed920aab513665fb6c4a8b4d8b15,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7966a309-dca2-488e-b683-0ff37fa01fe3,Namespace:kube-system,Attempt:1,},State
:SANDBOX_READY,CreatedAt:1718018352064889047,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7966a309-dca2-488e-b683-0ff37fa01fe3,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp
\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-10T11:19:11.726789870Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:87bf102e2b2943355dabc72d3e0980da5c49276950d1ad4b2fc9c2f1f768e8e0,Metadata:&PodSandboxMetadata{Name:etcd-multinode-862380,Uid:134cbc49aee8e613a34fe93b9347c702,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718018347207724809,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134cbc49aee8e613a34fe93b9347c702,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.100:2379,kubernetes.io/config.hash: 134cbc49aee8e613a34fe93b9347c702,kubernetes.io/config.seen: 2024-06-10T11:19:06.739127245Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dcd8d5c9c8cc1d7f6550cc6d27b429fa8028411f6868b679a6883186ce6898e2,Metada
ta:&PodSandboxMetadata{Name:kube-controller-manager-multinode-862380,Uid:0f4531b47a5c5353a3b6d9c833bc5c53,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718018347201521896,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4531b47a5c5353a3b6d9c833bc5c53,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0f4531b47a5c5353a3b6d9c833bc5c53,kubernetes.io/config.seen: 2024-06-10T11:19:06.739131912Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1a238893e319e44879cd357493747cefc3bd8860f007d2383c98f0d686678db0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-862380,Uid:8d5215e23358f00a13bf40785087f55d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718018347194923463,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io
.kubernetes.pod.name: kube-scheduler-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5215e23358f00a13bf40785087f55d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8d5215e23358f00a13bf40785087f55d,kubernetes.io/config.seen: 2024-06-10T11:19:06.739132935Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0b2ba625d3d8f5417652f5e20ac755f7fd3a72975d10e8ac6dd75ff553730dae,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-862380,Uid:403c273aa070281af0f1949448b47864,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718018347194311848,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 403c273aa070281af0f1949448b47864,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.100:8443,kuberne
tes.io/config.hash: 403c273aa070281af0f1949448b47864,kubernetes.io/config.seen: 2024-06-10T11:19:06.739130582Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cfbc0a4db39045ee382b6a54d8d5f5da4410877bfde75f2ee86af08cede879e0,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-jx8f9,Uid:237e1205-8c4b-4234-ad0f-80e35f097827,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718018045121588532,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-jx8f9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 237e1205-8c4b-4234-ad0f-80e35f097827,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T11:14:04.808575248Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:41beb7220db38d30d9a9e09ec9c7a266465505827ab8beb5023e3e210a3baa7b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7966a309-dca2-488e-b683-0ff37fa01fe3,Namespace:kube-system,Attempt:0,}
,State:SANDBOX_NOTREADY,CreatedAt:1718018002713151508,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7966a309-dca2-488e-b683-0ff37fa01fe3,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path
\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-10T11:13:22.404497399Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:024549fd085df2c3f26e3b57056e36220f606174179776d0ec5517d7ab213ed2,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-vfxw9,Uid:56f70aa4-9ef6-4257-86b3-4fd0968b2e37,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718018002703520957,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-vfxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f70aa4-9ef6-4257-86b3-4fd0968b2e37,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T11:13:22.396356706Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:47791e1db12ccb5a3125bf15245a19e55a3ce586fd87ad323ea1f816731386b1,Metadata:&PodSandboxMetadata{Name:kindnet-bnpjz,Uid:6d6d1e96-ea64-4ea0-855a-0e8fedb5164d,Namespace:kube-sys
tem,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718017997735974005,Labels:map[string]string{app: kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-bnpjz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d6d1e96-ea64-4ea0-855a-0e8fedb5164d,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T11:13:17.417649738Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a2c6585397cfe84addb16de8bb37037463d7253e6320d81daa859502341f8f85,Metadata:&PodSandboxMetadata{Name:kube-proxy-gghfj,Uid:d6793da8-f52b-488b-a0ec-88cbf6460c13,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718017997735425640,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gghfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6793da8-f52b-488b-a0ec-88cbf6460c13,k8s-app: kub
e-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T11:13:17.421940696Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:10c8e06b75105c6690ee540a76a09dcc7cc12fcbdf5b36d4eb25ead4778cc4c1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-862380,Uid:403c273aa070281af0f1949448b47864,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718017978495105745,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 403c273aa070281af0f1949448b47864,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.100:8443,kubernetes.io/config.hash: 403c273aa070281af0f1949448b47864,kubernetes.io/config.seen: 2024-06-10T11:12:58.021247059Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:04f27b50f52704
344dd889054f4cf6da33cebd323a5db935ef89eb4abe78ffe8,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-862380,Uid:0f4531b47a5c5353a3b6d9c833bc5c53,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718017978491863068,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4531b47a5c5353a3b6d9c833bc5c53,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0f4531b47a5c5353a3b6d9c833bc5c53,kubernetes.io/config.seen: 2024-06-10T11:12:58.021248311Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c88f109c2c83a6337b70493edeaa6bdda09624f9dbef45778d2ef091c19aeac1,Metadata:&PodSandboxMetadata{Name:etcd-multinode-862380,Uid:134cbc49aee8e613a34fe93b9347c702,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718017978489978175,Labels:map[string]string{component
: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134cbc49aee8e613a34fe93b9347c702,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.100:2379,kubernetes.io/config.hash: 134cbc49aee8e613a34fe93b9347c702,kubernetes.io/config.seen: 2024-06-10T11:12:58.021242290Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dc44bfa9ee46200e44408345aa810713cfebf553e56e6a32f65ec6bd305edeb0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-862380,Uid:8d5215e23358f00a13bf40785087f55d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718017978472157614,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5215e23358f00a13bf40785087f55d,tier: control-plane,},Annotati
ons:map[string]string{kubernetes.io/config.hash: 8d5215e23358f00a13bf40785087f55d,kubernetes.io/config.seen: 2024-06-10T11:12:58.021249617Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=dab242ba-b5d2-4c94-bbf4-3ac8bac1a9cc name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 10 11:20:29 multinode-862380 crio[2864]: time="2024-06-10 11:20:29.004056087Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07c845fb-fb9b-4e5e-ae57-d823415a4a61 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:20:29 multinode-862380 crio[2864]: time="2024-06-10 11:20:29.004104674Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07c845fb-fb9b-4e5e-ae57-d823415a4a61 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:20:29 multinode-862380 crio[2864]: time="2024-06-10 11:20:29.004475765Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2fc79e4de71d445e66c76ebc879593d2599c2c77229107f2a96a78737d49d6e,PodSandboxId:1daffe5524d188139839a6b1b96ad5ca5edfb98a6eff8bb442212a5c47d51c59,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718018385981027011,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jx8f9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 237e1205-8c4b-4234-ad0f-80e35f097827,},Annotations:map[string]string{io.kubernetes.container.hash: 6b71ca20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d239e403b99cbc595846d609ca3877c0378cd522cc51a4ef8e62481693d5022,PodSandboxId:fe929d942cc9e63e145c553e0aa9f5268b3af05b033b39c69c2f4bf196375602,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718018352558458161,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vfxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f70aa4-9ef6-4257-86b3-4fd0968b2e37,},Annotations:map[string]string{io.kubernetes.container.hash: 3bb49cae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d8e67a6d2840c27a7e9918a80a0a0c785dc7b6d2bd90a358d542bc6a1aabe74,PodSandboxId:abfa9aa50974623da5a50a69184494c217cf08dbc6007db84d76e812590ddb52,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718018352479657098,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bnpjz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d6d1e96-ea64-4ea0-855a-0e8fe
db5164d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f1d502d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55fbec1ed1f5c35125219a44fd079a722d49d9d8cbdb2455f8a70f01da71ed4e,PodSandboxId:4517c9efbd8541d8d1d37f445a576a5f35bb0182780f23bc213b682f1e16ae21,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718018352360363094,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gghfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6793da8-f52b-488b-a0ec-88cbf6460c13,},Annotations:map[string]
string{io.kubernetes.container.hash: ab55db52,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0a3bf0e596a0cca6f9831fcb9b458d5e853147197c42b8d6060f07e94f173f5,PodSandboxId:cdcc0f30f293274460a437197df073c4e406ed920aab513665fb6c4a8b4d8b15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718018352319151234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7966a309-dca2-488e-b683-0ff37fa01fe3,},Annotations:map[string]string{io.ku
bernetes.container.hash: 886eec8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e188af5e7614aace4ffe7147aadf26b4ae34f2212f99727a96e4a432272564dc,PodSandboxId:87bf102e2b2943355dabc72d3e0980da5c49276950d1ad4b2fc9c2f1f768e8e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718018347445798005,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134cbc49aee8e613a34fe93b9347c702,},Annotations:map[string]string{io.kubernetes.container.hash: 9e626184,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43dceb31898cc620cff1b69f4b915cc293db2955ad4fdfa09aaf24f4ba57bde1,PodSandboxId:dcd8d5c9c8cc1d7f6550cc6d27b429fa8028411f6868b679a6883186ce6898e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718018347411124949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4531b47a5c5353a3b6d9c833bc5c53,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5
e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50310344784fc7f085c0a0d226fde85f9b838c4bcfeaafbde1cf90adf4432aee,PodSandboxId:1a238893e319e44879cd357493747cefc3bd8860f007d2383c98f0d686678db0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718018347413342565,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5215e23358f00a13bf40785087f55d,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5971ca1a108b34acbf6ae63f70db7b15d696e6cd577d1f3356a2b6661bb028d8,PodSandboxId:0b2ba625d3d8f5417652f5e20ac755f7fd3a72975d10e8ac6dd75ff553730dae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718018347339762997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 403c273aa070281af0f1949448b47864,},Annotations:map[string]string{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cc5324b6db46ad8a78594835c98c73f0f42d1c87636abde9b15fb4cbd4d2151,PodSandboxId:cfbc0a4db39045ee382b6a54d8d5f5da4410877bfde75f2ee86af08cede879e0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718018047623152578,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jx8f9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 237e1205-8c4b-4234-ad0f-80e35f097827,},Annotations:map[string]string{io.kubernetes.container.hash: 6b71ca20,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cb3861c89e33df4af9682d4ecbad3f6bbc0a9150d26e80be390d8550cd3e90,PodSandboxId:024549fd085df2c3f26e3b57056e36220f606174179776d0ec5517d7ab213ed2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718018002906701577,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vfxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f70aa4-9ef6-4257-86b3-4fd0968b2e37,},Annotations:map[string]string{io.kubernetes.container.hash: 3bb49cae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0bc49dc154cf467f6f2dd93ab0e78907f6d0f8592e164371108706cc509e00f,PodSandboxId:41beb7220db38d30d9a9e09ec9c7a266465505827ab8beb5023e3e210a3baa7b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718018002842630237,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 7966a309-dca2-488e-b683-0ff37fa01fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 886eec8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2791a953a9200b3f61b8829c703b259f1483f87c5e99ce9cfaa18109775e0fc,PodSandboxId:47791e1db12ccb5a3125bf15245a19e55a3ce586fd87ad323ea1f816731386b1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718018001431173205,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bnpjz,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 6d6d1e96-ea64-4ea0-855a-0e8fedb5164d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f1d502d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7dcbfcd0f6f950677096624f71b7ec58dbe647a45bfe1896dd52dd14753a55c,PodSandboxId:a2c6585397cfe84addb16de8bb37037463d7253e6320d81daa859502341f8f85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718017997985587185,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gghfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d6793da8-f52b-488b-a0ec-88cbf6460c13,},Annotations:map[string]string{io.kubernetes.container.hash: ab55db52,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7b3e1262dc380437d24a63b8d3b43827f62b39b385c799ae1a3c75195a3b976,PodSandboxId:c88f109c2c83a6337b70493edeaa6bdda09624f9dbef45778d2ef091c19aeac1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718017978705276425,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134cbc49aee8e613a34fe93b9347c702
,},Annotations:map[string]string{io.kubernetes.container.hash: 9e626184,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c465791f6493e7b755a5672c14ce27cf99149ae704df0b5b7ba7589cbdccd3f,PodSandboxId:dc44bfa9ee46200e44408345aa810713cfebf553e56e6a32f65ec6bd305edeb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718017978724535495,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5215e23358f00a13bf40785087f55d,},Annotations:
map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58557fa016e58b7c0cbd020c0c94ce71b80658955335b632f9b63f06aaec7266,PodSandboxId:10c8e06b75105c6690ee540a76a09dcc7cc12fcbdf5b36d4eb25ead4778cc4c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718017978654023906,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 403c273aa070281af0f1949448b47864,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f84f021658bb7edbb72828c3cdce1348895737f86d83744cb73982fa6cdc4cb,PodSandboxId:04f27b50f52704344dd889054f4cf6da33cebd323a5db935ef89eb4abe78ffe8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718017978635289553,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4531b47a5c5353a3b6d9c833bc5c53,},Annotations:map
[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07c845fb-fb9b-4e5e-ae57-d823415a4a61 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:20:29 multinode-862380 crio[2864]: time="2024-06-10 11:20:29.043177294Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fb758d5e-f788-4136-9dfe-9077c49a9b59 name=/runtime.v1.RuntimeService/Version
	Jun 10 11:20:29 multinode-862380 crio[2864]: time="2024-06-10 11:20:29.043249023Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fb758d5e-f788-4136-9dfe-9077c49a9b59 name=/runtime.v1.RuntimeService/Version
	Jun 10 11:20:29 multinode-862380 crio[2864]: time="2024-06-10 11:20:29.044132525Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=075e541e-b85b-48d0-a846-0e517aac1525 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:20:29 multinode-862380 crio[2864]: time="2024-06-10 11:20:29.044523119Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718018429044500924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=075e541e-b85b-48d0-a846-0e517aac1525 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:20:29 multinode-862380 crio[2864]: time="2024-06-10 11:20:29.045062941Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb770cc9-7ea9-401f-9365-d746d50d4ce5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:20:29 multinode-862380 crio[2864]: time="2024-06-10 11:20:29.045120938Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb770cc9-7ea9-401f-9365-d746d50d4ce5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:20:29 multinode-862380 crio[2864]: time="2024-06-10 11:20:29.045460698Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2fc79e4de71d445e66c76ebc879593d2599c2c77229107f2a96a78737d49d6e,PodSandboxId:1daffe5524d188139839a6b1b96ad5ca5edfb98a6eff8bb442212a5c47d51c59,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718018385981027011,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jx8f9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 237e1205-8c4b-4234-ad0f-80e35f097827,},Annotations:map[string]string{io.kubernetes.container.hash: 6b71ca20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d239e403b99cbc595846d609ca3877c0378cd522cc51a4ef8e62481693d5022,PodSandboxId:fe929d942cc9e63e145c553e0aa9f5268b3af05b033b39c69c2f4bf196375602,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718018352558458161,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vfxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f70aa4-9ef6-4257-86b3-4fd0968b2e37,},Annotations:map[string]string{io.kubernetes.container.hash: 3bb49cae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d8e67a6d2840c27a7e9918a80a0a0c785dc7b6d2bd90a358d542bc6a1aabe74,PodSandboxId:abfa9aa50974623da5a50a69184494c217cf08dbc6007db84d76e812590ddb52,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718018352479657098,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bnpjz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d6d1e96-ea64-4ea0-855a-0e8fe
db5164d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f1d502d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55fbec1ed1f5c35125219a44fd079a722d49d9d8cbdb2455f8a70f01da71ed4e,PodSandboxId:4517c9efbd8541d8d1d37f445a576a5f35bb0182780f23bc213b682f1e16ae21,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718018352360363094,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gghfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6793da8-f52b-488b-a0ec-88cbf6460c13,},Annotations:map[string]
string{io.kubernetes.container.hash: ab55db52,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0a3bf0e596a0cca6f9831fcb9b458d5e853147197c42b8d6060f07e94f173f5,PodSandboxId:cdcc0f30f293274460a437197df073c4e406ed920aab513665fb6c4a8b4d8b15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718018352319151234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7966a309-dca2-488e-b683-0ff37fa01fe3,},Annotations:map[string]string{io.ku
bernetes.container.hash: 886eec8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e188af5e7614aace4ffe7147aadf26b4ae34f2212f99727a96e4a432272564dc,PodSandboxId:87bf102e2b2943355dabc72d3e0980da5c49276950d1ad4b2fc9c2f1f768e8e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718018347445798005,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134cbc49aee8e613a34fe93b9347c702,},Annotations:map[string]string{io.kubernetes.container.hash: 9e626184,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43dceb31898cc620cff1b69f4b915cc293db2955ad4fdfa09aaf24f4ba57bde1,PodSandboxId:dcd8d5c9c8cc1d7f6550cc6d27b429fa8028411f6868b679a6883186ce6898e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718018347411124949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4531b47a5c5353a3b6d9c833bc5c53,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5
e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50310344784fc7f085c0a0d226fde85f9b838c4bcfeaafbde1cf90adf4432aee,PodSandboxId:1a238893e319e44879cd357493747cefc3bd8860f007d2383c98f0d686678db0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718018347413342565,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5215e23358f00a13bf40785087f55d,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5971ca1a108b34acbf6ae63f70db7b15d696e6cd577d1f3356a2b6661bb028d8,PodSandboxId:0b2ba625d3d8f5417652f5e20ac755f7fd3a72975d10e8ac6dd75ff553730dae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718018347339762997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 403c273aa070281af0f1949448b47864,},Annotations:map[string]string{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cc5324b6db46ad8a78594835c98c73f0f42d1c87636abde9b15fb4cbd4d2151,PodSandboxId:cfbc0a4db39045ee382b6a54d8d5f5da4410877bfde75f2ee86af08cede879e0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718018047623152578,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jx8f9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 237e1205-8c4b-4234-ad0f-80e35f097827,},Annotations:map[string]string{io.kubernetes.container.hash: 6b71ca20,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cb3861c89e33df4af9682d4ecbad3f6bbc0a9150d26e80be390d8550cd3e90,PodSandboxId:024549fd085df2c3f26e3b57056e36220f606174179776d0ec5517d7ab213ed2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718018002906701577,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vfxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f70aa4-9ef6-4257-86b3-4fd0968b2e37,},Annotations:map[string]string{io.kubernetes.container.hash: 3bb49cae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0bc49dc154cf467f6f2dd93ab0e78907f6d0f8592e164371108706cc509e00f,PodSandboxId:41beb7220db38d30d9a9e09ec9c7a266465505827ab8beb5023e3e210a3baa7b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718018002842630237,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 7966a309-dca2-488e-b683-0ff37fa01fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 886eec8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2791a953a9200b3f61b8829c703b259f1483f87c5e99ce9cfaa18109775e0fc,PodSandboxId:47791e1db12ccb5a3125bf15245a19e55a3ce586fd87ad323ea1f816731386b1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718018001431173205,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bnpjz,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 6d6d1e96-ea64-4ea0-855a-0e8fedb5164d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f1d502d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7dcbfcd0f6f950677096624f71b7ec58dbe647a45bfe1896dd52dd14753a55c,PodSandboxId:a2c6585397cfe84addb16de8bb37037463d7253e6320d81daa859502341f8f85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718017997985587185,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gghfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d6793da8-f52b-488b-a0ec-88cbf6460c13,},Annotations:map[string]string{io.kubernetes.container.hash: ab55db52,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7b3e1262dc380437d24a63b8d3b43827f62b39b385c799ae1a3c75195a3b976,PodSandboxId:c88f109c2c83a6337b70493edeaa6bdda09624f9dbef45778d2ef091c19aeac1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718017978705276425,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134cbc49aee8e613a34fe93b9347c702
,},Annotations:map[string]string{io.kubernetes.container.hash: 9e626184,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c465791f6493e7b755a5672c14ce27cf99149ae704df0b5b7ba7589cbdccd3f,PodSandboxId:dc44bfa9ee46200e44408345aa810713cfebf553e56e6a32f65ec6bd305edeb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718017978724535495,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5215e23358f00a13bf40785087f55d,},Annotations:
map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58557fa016e58b7c0cbd020c0c94ce71b80658955335b632f9b63f06aaec7266,PodSandboxId:10c8e06b75105c6690ee540a76a09dcc7cc12fcbdf5b36d4eb25ead4778cc4c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718017978654023906,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 403c273aa070281af0f1949448b47864,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f84f021658bb7edbb72828c3cdce1348895737f86d83744cb73982fa6cdc4cb,PodSandboxId:04f27b50f52704344dd889054f4cf6da33cebd323a5db935ef89eb4abe78ffe8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718017978635289553,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4531b47a5c5353a3b6d9c833bc5c53,},Annotations:map
[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb770cc9-7ea9-401f-9365-d746d50d4ce5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:20:29 multinode-862380 crio[2864]: time="2024-06-10 11:20:29.094428285Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f45634e1-327a-4533-8450-effc4e4503b2 name=/runtime.v1.RuntimeService/Version
	Jun 10 11:20:29 multinode-862380 crio[2864]: time="2024-06-10 11:20:29.094501389Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f45634e1-327a-4533-8450-effc4e4503b2 name=/runtime.v1.RuntimeService/Version
	Jun 10 11:20:29 multinode-862380 crio[2864]: time="2024-06-10 11:20:29.095570188Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0cec927c-6dc2-4f43-b0cd-55d562143087 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:20:29 multinode-862380 crio[2864]: time="2024-06-10 11:20:29.096078462Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718018429096055392,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0cec927c-6dc2-4f43-b0cd-55d562143087 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:20:29 multinode-862380 crio[2864]: time="2024-06-10 11:20:29.096730827Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7cd9a7f2-0914-4efd-9ce4-6b5c42ac5218 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:20:29 multinode-862380 crio[2864]: time="2024-06-10 11:20:29.096781825Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7cd9a7f2-0914-4efd-9ce4-6b5c42ac5218 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:20:29 multinode-862380 crio[2864]: time="2024-06-10 11:20:29.097229325Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2fc79e4de71d445e66c76ebc879593d2599c2c77229107f2a96a78737d49d6e,PodSandboxId:1daffe5524d188139839a6b1b96ad5ca5edfb98a6eff8bb442212a5c47d51c59,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718018385981027011,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jx8f9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 237e1205-8c4b-4234-ad0f-80e35f097827,},Annotations:map[string]string{io.kubernetes.container.hash: 6b71ca20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d239e403b99cbc595846d609ca3877c0378cd522cc51a4ef8e62481693d5022,PodSandboxId:fe929d942cc9e63e145c553e0aa9f5268b3af05b033b39c69c2f4bf196375602,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718018352558458161,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vfxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f70aa4-9ef6-4257-86b3-4fd0968b2e37,},Annotations:map[string]string{io.kubernetes.container.hash: 3bb49cae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d8e67a6d2840c27a7e9918a80a0a0c785dc7b6d2bd90a358d542bc6a1aabe74,PodSandboxId:abfa9aa50974623da5a50a69184494c217cf08dbc6007db84d76e812590ddb52,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718018352479657098,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bnpjz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d6d1e96-ea64-4ea0-855a-0e8fe
db5164d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f1d502d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55fbec1ed1f5c35125219a44fd079a722d49d9d8cbdb2455f8a70f01da71ed4e,PodSandboxId:4517c9efbd8541d8d1d37f445a576a5f35bb0182780f23bc213b682f1e16ae21,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718018352360363094,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gghfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6793da8-f52b-488b-a0ec-88cbf6460c13,},Annotations:map[string]
string{io.kubernetes.container.hash: ab55db52,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0a3bf0e596a0cca6f9831fcb9b458d5e853147197c42b8d6060f07e94f173f5,PodSandboxId:cdcc0f30f293274460a437197df073c4e406ed920aab513665fb6c4a8b4d8b15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718018352319151234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7966a309-dca2-488e-b683-0ff37fa01fe3,},Annotations:map[string]string{io.ku
bernetes.container.hash: 886eec8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e188af5e7614aace4ffe7147aadf26b4ae34f2212f99727a96e4a432272564dc,PodSandboxId:87bf102e2b2943355dabc72d3e0980da5c49276950d1ad4b2fc9c2f1f768e8e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718018347445798005,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134cbc49aee8e613a34fe93b9347c702,},Annotations:map[string]string{io.kubernetes.container.hash: 9e626184,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43dceb31898cc620cff1b69f4b915cc293db2955ad4fdfa09aaf24f4ba57bde1,PodSandboxId:dcd8d5c9c8cc1d7f6550cc6d27b429fa8028411f6868b679a6883186ce6898e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718018347411124949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4531b47a5c5353a3b6d9c833bc5c53,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5
e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50310344784fc7f085c0a0d226fde85f9b838c4bcfeaafbde1cf90adf4432aee,PodSandboxId:1a238893e319e44879cd357493747cefc3bd8860f007d2383c98f0d686678db0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718018347413342565,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5215e23358f00a13bf40785087f55d,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5971ca1a108b34acbf6ae63f70db7b15d696e6cd577d1f3356a2b6661bb028d8,PodSandboxId:0b2ba625d3d8f5417652f5e20ac755f7fd3a72975d10e8ac6dd75ff553730dae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718018347339762997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 403c273aa070281af0f1949448b47864,},Annotations:map[string]string{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cc5324b6db46ad8a78594835c98c73f0f42d1c87636abde9b15fb4cbd4d2151,PodSandboxId:cfbc0a4db39045ee382b6a54d8d5f5da4410877bfde75f2ee86af08cede879e0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718018047623152578,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jx8f9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 237e1205-8c4b-4234-ad0f-80e35f097827,},Annotations:map[string]string{io.kubernetes.container.hash: 6b71ca20,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cb3861c89e33df4af9682d4ecbad3f6bbc0a9150d26e80be390d8550cd3e90,PodSandboxId:024549fd085df2c3f26e3b57056e36220f606174179776d0ec5517d7ab213ed2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718018002906701577,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vfxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f70aa4-9ef6-4257-86b3-4fd0968b2e37,},Annotations:map[string]string{io.kubernetes.container.hash: 3bb49cae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0bc49dc154cf467f6f2dd93ab0e78907f6d0f8592e164371108706cc509e00f,PodSandboxId:41beb7220db38d30d9a9e09ec9c7a266465505827ab8beb5023e3e210a3baa7b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718018002842630237,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 7966a309-dca2-488e-b683-0ff37fa01fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 886eec8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2791a953a9200b3f61b8829c703b259f1483f87c5e99ce9cfaa18109775e0fc,PodSandboxId:47791e1db12ccb5a3125bf15245a19e55a3ce586fd87ad323ea1f816731386b1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718018001431173205,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bnpjz,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 6d6d1e96-ea64-4ea0-855a-0e8fedb5164d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f1d502d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7dcbfcd0f6f950677096624f71b7ec58dbe647a45bfe1896dd52dd14753a55c,PodSandboxId:a2c6585397cfe84addb16de8bb37037463d7253e6320d81daa859502341f8f85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718017997985587185,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gghfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d6793da8-f52b-488b-a0ec-88cbf6460c13,},Annotations:map[string]string{io.kubernetes.container.hash: ab55db52,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7b3e1262dc380437d24a63b8d3b43827f62b39b385c799ae1a3c75195a3b976,PodSandboxId:c88f109c2c83a6337b70493edeaa6bdda09624f9dbef45778d2ef091c19aeac1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718017978705276425,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134cbc49aee8e613a34fe93b9347c702
,},Annotations:map[string]string{io.kubernetes.container.hash: 9e626184,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c465791f6493e7b755a5672c14ce27cf99149ae704df0b5b7ba7589cbdccd3f,PodSandboxId:dc44bfa9ee46200e44408345aa810713cfebf553e56e6a32f65ec6bd305edeb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718017978724535495,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5215e23358f00a13bf40785087f55d,},Annotations:
map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58557fa016e58b7c0cbd020c0c94ce71b80658955335b632f9b63f06aaec7266,PodSandboxId:10c8e06b75105c6690ee540a76a09dcc7cc12fcbdf5b36d4eb25ead4778cc4c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718017978654023906,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 403c273aa070281af0f1949448b47864,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f84f021658bb7edbb72828c3cdce1348895737f86d83744cb73982fa6cdc4cb,PodSandboxId:04f27b50f52704344dd889054f4cf6da33cebd323a5db935ef89eb4abe78ffe8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718017978635289553,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4531b47a5c5353a3b6d9c833bc5c53,},Annotations:map
[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7cd9a7f2-0914-4efd-9ce4-6b5c42ac5218 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c2fc79e4de71d       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      43 seconds ago       Running             busybox                   1                   1daffe5524d18       busybox-fc5497c4f-jx8f9
	1d239e403b99c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   fe929d942cc9e       coredns-7db6d8ff4d-vfxw9
	5d8e67a6d2840       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      About a minute ago   Running             kindnet-cni               1                   abfa9aa509746       kindnet-bnpjz
	55fbec1ed1f5c       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      About a minute ago   Running             kube-proxy                1                   4517c9efbd854       kube-proxy-gghfj
	a0a3bf0e596a0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   cdcc0f30f2932       storage-provisioner
	e188af5e7614a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   87bf102e2b294       etcd-multinode-862380
	50310344784fc       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      About a minute ago   Running             kube-scheduler            1                   1a238893e319e       kube-scheduler-multinode-862380
	43dceb31898cc       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      About a minute ago   Running             kube-controller-manager   1                   dcd8d5c9c8cc1       kube-controller-manager-multinode-862380
	5971ca1a108b3       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      About a minute ago   Running             kube-apiserver            1                   0b2ba625d3d8f       kube-apiserver-multinode-862380
	7cc5324b6db46       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   cfbc0a4db3904       busybox-fc5497c4f-jx8f9
	e0cb3861c89e3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   024549fd085df       coredns-7db6d8ff4d-vfxw9
	b0bc49dc154cf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   41beb7220db38       storage-provisioner
	f2791a953a920       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266    7 minutes ago        Exited              kindnet-cni               0                   47791e1db12cc       kindnet-bnpjz
	d7dcbfcd0f6f9       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      7 minutes ago        Exited              kube-proxy                0                   a2c6585397cfe       kube-proxy-gghfj
	9c465791f6493       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      7 minutes ago        Exited              kube-scheduler            0                   dc44bfa9ee462       kube-scheduler-multinode-862380
	e7b3e1262dc38       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   c88f109c2c83a       etcd-multinode-862380
	58557fa016e58       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      7 minutes ago        Exited              kube-apiserver            0                   10c8e06b75105       kube-apiserver-multinode-862380
	4f84f021658bb       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      7 minutes ago        Exited              kube-controller-manager   0                   04f27b50f5270       kube-controller-manager-multinode-862380
	
	
	==> coredns [1d239e403b99cbc595846d609ca3877c0378cd522cc51a4ef8e62481693d5022] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36247 - 62168 "HINFO IN 1200695844873085136.5283719998216550195. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023148131s
	
	
	==> coredns [e0cb3861c89e33df4af9682d4ecbad3f6bbc0a9150d26e80be390d8550cd3e90] <==
	[INFO] 10.244.1.2:54313 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002039405s
	[INFO] 10.244.1.2:36796 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099531s
	[INFO] 10.244.1.2:42431 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067579s
	[INFO] 10.244.1.2:35027 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002059486s
	[INFO] 10.244.1.2:48138 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000138374s
	[INFO] 10.244.1.2:57481 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072838s
	[INFO] 10.244.1.2:58012 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082046s
	[INFO] 10.244.0.3:34666 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000072309s
	[INFO] 10.244.0.3:42571 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000508s
	[INFO] 10.244.0.3:33740 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000046907s
	[INFO] 10.244.0.3:52883 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000032661s
	[INFO] 10.244.1.2:55811 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132338s
	[INFO] 10.244.1.2:44313 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000083809s
	[INFO] 10.244.1.2:45315 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082396s
	[INFO] 10.244.1.2:40327 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067705s
	[INFO] 10.244.0.3:53262 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125203s
	[INFO] 10.244.0.3:33362 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117514s
	[INFO] 10.244.0.3:55521 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000171111s
	[INFO] 10.244.0.3:34043 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000080604s
	[INFO] 10.244.1.2:42263 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117724s
	[INFO] 10.244.1.2:48635 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000123254s
	[INFO] 10.244.1.2:42541 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119423s
	[INFO] 10.244.1.2:52962 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121279s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-862380
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-862380
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=multinode-862380
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T11_13_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 11:13:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-862380
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 11:20:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 11:19:11 +0000   Mon, 10 Jun 2024 11:12:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 11:19:11 +0000   Mon, 10 Jun 2024 11:12:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 11:19:11 +0000   Mon, 10 Jun 2024 11:12:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 11:19:11 +0000   Mon, 10 Jun 2024 11:13:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    multinode-862380
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8956567b7bc94df6916f5e4faa01fbfb
	  System UUID:                8956567b-7bc9-4df6-916f-5e4faa01fbfb
	  Boot ID:                    9746547f-4a12-4129-881a-ffbf15d2057e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jx8f9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 coredns-7db6d8ff4d-vfxw9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m12s
	  kube-system                 etcd-multinode-862380                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m26s
	  kube-system                 kindnet-bnpjz                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m12s
	  kube-system                 kube-apiserver-multinode-862380             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 kube-controller-manager-multinode-862380    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 kube-proxy-gghfj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 kube-scheduler-multinode-862380             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m10s                  kube-proxy       
	  Normal  Starting                 76s                    kube-proxy       
	  Normal  Starting                 7m31s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m31s (x8 over 7m31s)  kubelet          Node multinode-862380 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m31s (x8 over 7m31s)  kubelet          Node multinode-862380 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m31s (x7 over 7m31s)  kubelet          Node multinode-862380 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    7m26s                  kubelet          Node multinode-862380 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  7m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m26s                  kubelet          Node multinode-862380 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     7m26s                  kubelet          Node multinode-862380 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m26s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m13s                  node-controller  Node multinode-862380 event: Registered Node multinode-862380 in Controller
	  Normal  NodeReady                7m7s                   kubelet          Node multinode-862380 status is now: NodeReady
	  Normal  Starting                 83s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  83s (x8 over 83s)      kubelet          Node multinode-862380 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    83s (x8 over 83s)      kubelet          Node multinode-862380 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s (x7 over 83s)      kubelet          Node multinode-862380 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  83s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           66s                    node-controller  Node multinode-862380 event: Registered Node multinode-862380 in Controller
	
	
	Name:               multinode-862380-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-862380-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=multinode-862380
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T11_19_50_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 11:19:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-862380-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 11:20:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 11:20:20 +0000   Mon, 10 Jun 2024 11:19:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 11:20:20 +0000   Mon, 10 Jun 2024 11:19:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 11:20:20 +0000   Mon, 10 Jun 2024 11:19:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 11:20:20 +0000   Mon, 10 Jun 2024 11:19:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.47
	  Hostname:    multinode-862380-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 32d4e1e72b73492d8bcbcbaf9ac8e1d9
	  System UUID:                32d4e1e7-2b73-492d-8bcb-cbaf9ac8e1d9
	  Boot ID:                    2bb01a2f-dd28-47e1-b530-fb3cdee20701
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-v8jhp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kindnet-ctwr4              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m37s
	  kube-system                 kube-proxy-n8lzw           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 35s                    kube-proxy       
	  Normal  Starting                 6m31s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m37s (x2 over 6m37s)  kubelet          Node multinode-862380-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m37s (x2 over 6m37s)  kubelet          Node multinode-862380-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m37s (x2 over 6m37s)  kubelet          Node multinode-862380-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m27s                  kubelet          Node multinode-862380-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  40s (x2 over 40s)      kubelet          Node multinode-862380-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x2 over 40s)      kubelet          Node multinode-862380-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x2 over 40s)      kubelet          Node multinode-862380-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  40s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           36s                    node-controller  Node multinode-862380-m02 event: Registered Node multinode-862380-m02 in Controller
	  Normal  NodeReady                32s                    kubelet          Node multinode-862380-m02 status is now: NodeReady
	
	
	Name:               multinode-862380-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-862380-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=multinode-862380
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T11_20_17_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 11:20:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-862380-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 11:20:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 11:20:26 +0000   Mon, 10 Jun 2024 11:20:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 11:20:26 +0000   Mon, 10 Jun 2024 11:20:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 11:20:26 +0000   Mon, 10 Jun 2024 11:20:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 11:20:26 +0000   Mon, 10 Jun 2024 11:20:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    multinode-862380-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 08f26f5d22a74875ab1de2fe1bf48d3c
	  System UUID:                08f26f5d-22a7-4875-ab1d-e2fe1bf48d3c
	  Boot ID:                    f33b097c-0b7d-4178-85c7-9954f4aa4bd3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-mqzsw       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m54s
	  kube-system                 kube-proxy-7gbwh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m10s                  kube-proxy  
	  Normal  Starting                 5m48s                  kube-proxy  
	  Normal  Starting                 7s                     kube-proxy  
	  Normal  NodeHasSufficientMemory  5m54s (x2 over 5m54s)  kubelet     Node multinode-862380-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m54s (x2 over 5m54s)  kubelet     Node multinode-862380-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m54s (x2 over 5m54s)  kubelet     Node multinode-862380-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m54s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m54s                  kubelet     Starting kubelet.
	  Normal  NodeReady                5m44s                  kubelet     Node multinode-862380-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m15s (x2 over 5m15s)  kubelet     Node multinode-862380-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  5m15s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     5m15s (x2 over 5m15s)  kubelet     Node multinode-862380-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  5m15s (x2 over 5m15s)  kubelet     Node multinode-862380-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m6s                   kubelet     Node multinode-862380-m03 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  13s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12s (x2 over 13s)      kubelet     Node multinode-862380-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x2 over 13s)      kubelet     Node multinode-862380-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x2 over 13s)      kubelet     Node multinode-862380-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3s                     kubelet     Node multinode-862380-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.053278] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.158847] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.141099] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.248906] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +3.896513] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +3.984003] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.065848] kauditd_printk_skb: 158 callbacks suppressed
	[Jun10 11:13] systemd-fstab-generator[1279]: Ignoring "noauto" option for root device
	[  +0.069613] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.040712] systemd-fstab-generator[1475]: Ignoring "noauto" option for root device
	[  +0.106567] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.001235] kauditd_printk_skb: 60 callbacks suppressed
	[Jun10 11:14] kauditd_printk_skb: 14 callbacks suppressed
	[Jun10 11:18] systemd-fstab-generator[2776]: Ignoring "noauto" option for root device
	[  +0.145732] systemd-fstab-generator[2788]: Ignoring "noauto" option for root device
	[  +0.171468] systemd-fstab-generator[2802]: Ignoring "noauto" option for root device
	[  +0.133880] systemd-fstab-generator[2814]: Ignoring "noauto" option for root device
	[  +0.267198] systemd-fstab-generator[2842]: Ignoring "noauto" option for root device
	[Jun10 11:19] systemd-fstab-generator[2949]: Ignoring "noauto" option for root device
	[  +2.123912] systemd-fstab-generator[3071]: Ignoring "noauto" option for root device
	[  +0.081701] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.583360] kauditd_printk_skb: 52 callbacks suppressed
	[ +11.470341] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.972924] systemd-fstab-generator[3880]: Ignoring "noauto" option for root device
	[ +21.276139] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [e188af5e7614aace4ffe7147aadf26b4ae34f2212f99727a96e4a432272564dc] <==
	{"level":"info","ts":"2024-06-10T11:19:07.956639Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-10T11:19:07.95665Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-10T11:19:07.957215Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 switched to configuration voters=(3636168928135421492)"}
	{"level":"info","ts":"2024-06-10T11:19:07.957324Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6cf58294dcaef1c8","local-member-id":"3276445ff8d31e34","added-peer-id":"3276445ff8d31e34","added-peer-peer-urls":["https://192.168.39.100:2380"]}
	{"level":"info","ts":"2024-06-10T11:19:07.959681Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6cf58294dcaef1c8","local-member-id":"3276445ff8d31e34","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T11:19:07.959768Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T11:19:07.964388Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-10T11:19:07.972873Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3276445ff8d31e34","initial-advertise-peer-urls":["https://192.168.39.100:2380"],"listen-peer-urls":["https://192.168.39.100:2380"],"advertise-client-urls":["https://192.168.39.100:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.100:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-10T11:19:07.97304Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-10T11:19:07.964785Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-06-10T11:19:07.976218Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-06-10T11:19:09.597332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-10T11:19:09.597383Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-10T11:19:09.597431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 received MsgPreVoteResp from 3276445ff8d31e34 at term 2"}
	{"level":"info","ts":"2024-06-10T11:19:09.597444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 became candidate at term 3"}
	{"level":"info","ts":"2024-06-10T11:19:09.597449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 received MsgVoteResp from 3276445ff8d31e34 at term 3"}
	{"level":"info","ts":"2024-06-10T11:19:09.597457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 became leader at term 3"}
	{"level":"info","ts":"2024-06-10T11:19:09.597467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3276445ff8d31e34 elected leader 3276445ff8d31e34 at term 3"}
	{"level":"info","ts":"2024-06-10T11:19:09.602782Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"3276445ff8d31e34","local-member-attributes":"{Name:multinode-862380 ClientURLs:[https://192.168.39.100:2379]}","request-path":"/0/members/3276445ff8d31e34/attributes","cluster-id":"6cf58294dcaef1c8","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-10T11:19:09.60293Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T11:19:09.60321Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-10T11:19:09.603281Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-10T11:19:09.603354Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T11:19:09.605203Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.100:2379"}
	{"level":"info","ts":"2024-06-10T11:19:09.605219Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [e7b3e1262dc380437d24a63b8d3b43827f62b39b385c799ae1a3c75195a3b976] <==
	{"level":"info","ts":"2024-06-10T11:12:59.902914Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T11:12:59.903237Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T11:12:59.903369Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-10T11:12:59.903411Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-10T11:12:59.903727Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6cf58294dcaef1c8","local-member-id":"3276445ff8d31e34","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T11:12:59.903832Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T11:12:59.90387Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T11:12:59.905259Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-10T11:12:59.91302Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.100:2379"}
	{"level":"info","ts":"2024-06-10T11:13:52.704222Z","caller":"traceutil/trace.go:171","msg":"trace[2123348808] transaction","detail":"{read_only:false; response_revision:477; number_of_response:1; }","duration":"200.267207ms","start":"2024-06-10T11:13:52.503933Z","end":"2024-06-10T11:13:52.7042Z","steps":["trace[2123348808] 'process raft request'  (duration: 200.213634ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T11:13:52.704238Z","caller":"traceutil/trace.go:171","msg":"trace[413522120] transaction","detail":"{read_only:false; response_revision:476; number_of_response:1; }","duration":"264.989925ms","start":"2024-06-10T11:13:52.439231Z","end":"2024-06-10T11:13:52.704221Z","steps":["trace[413522120] 'process raft request'  (duration: 235.582093ms)","trace[413522120] 'compare'  (duration: 29.228493ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-10T11:14:35.800583Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.497195ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2176522857552705310 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-862380-m03.17d7a056559f99e5\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-862380-m03.17d7a056559f99e5\" value_size:646 lease:2176522857552705035 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-06-10T11:14:35.801111Z","caller":"traceutil/trace.go:171","msg":"trace[172763151] transaction","detail":"{read_only:false; response_revision:601; number_of_response:1; }","duration":"249.521196ms","start":"2024-06-10T11:14:35.551554Z","end":"2024-06-10T11:14:35.801075Z","steps":["trace[172763151] 'process raft request'  (duration: 79.446624ms)","trace[172763151] 'compare'  (duration: 168.303313ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-10T11:14:35.801307Z","caller":"traceutil/trace.go:171","msg":"trace[1401501465] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"186.092785ms","start":"2024-06-10T11:14:35.615201Z","end":"2024-06-10T11:14:35.801294Z","steps":["trace[1401501465] 'process raft request'  (duration: 185.848442ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T11:14:41.581876Z","caller":"traceutil/trace.go:171","msg":"trace[56415643] transaction","detail":"{read_only:false; response_revision:643; number_of_response:1; }","duration":"175.924925ms","start":"2024-06-10T11:14:41.405936Z","end":"2024-06-10T11:14:41.581861Z","steps":["trace[56415643] 'process raft request'  (duration: 175.808057ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T11:17:28.684927Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-06-10T11:17:28.685073Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-862380","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.100:2380"],"advertise-client-urls":["https://192.168.39.100:2379"]}
	{"level":"warn","ts":"2024-06-10T11:17:28.685184Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-10T11:17:28.685269Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-10T11:17:28.76712Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.100:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-10T11:17:28.767349Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.100:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-10T11:17:28.7675Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3276445ff8d31e34","current-leader-member-id":"3276445ff8d31e34"}
	{"level":"info","ts":"2024-06-10T11:17:28.770027Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-06-10T11:17:28.770184Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-06-10T11:17:28.770218Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-862380","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.100:2380"],"advertise-client-urls":["https://192.168.39.100:2379"]}
	
	
	==> kernel <==
	 11:20:29 up 8 min,  0 users,  load average: 0.21, 0.22, 0.12
	Linux multinode-862380 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5d8e67a6d2840c27a7e9918a80a0a0c785dc7b6d2bd90a358d542bc6a1aabe74] <==
	I0610 11:19:43.322003       1 main.go:250] Node multinode-862380-m03 has CIDR [10.244.3.0/24] 
	I0610 11:19:53.335669       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0610 11:19:53.335715       1 main.go:227] handling current node
	I0610 11:19:53.335729       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0610 11:19:53.335734       1 main.go:250] Node multinode-862380-m02 has CIDR [10.244.1.0/24] 
	I0610 11:19:53.335851       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0610 11:19:53.335870       1 main.go:250] Node multinode-862380-m03 has CIDR [10.244.3.0/24] 
	I0610 11:20:03.341478       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0610 11:20:03.341527       1 main.go:227] handling current node
	I0610 11:20:03.341540       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0610 11:20:03.341547       1 main.go:250] Node multinode-862380-m02 has CIDR [10.244.1.0/24] 
	I0610 11:20:03.341739       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0610 11:20:03.341764       1 main.go:250] Node multinode-862380-m03 has CIDR [10.244.3.0/24] 
	I0610 11:20:13.352132       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0610 11:20:13.352282       1 main.go:227] handling current node
	I0610 11:20:13.352350       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0610 11:20:13.352380       1 main.go:250] Node multinode-862380-m02 has CIDR [10.244.1.0/24] 
	I0610 11:20:13.352514       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0610 11:20:13.352535       1 main.go:250] Node multinode-862380-m03 has CIDR [10.244.3.0/24] 
	I0610 11:20:23.357553       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0610 11:20:23.357748       1 main.go:227] handling current node
	I0610 11:20:23.357789       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0610 11:20:23.357811       1 main.go:250] Node multinode-862380-m02 has CIDR [10.244.1.0/24] 
	I0610 11:20:23.357986       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0610 11:20:23.358031       1 main.go:250] Node multinode-862380-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [f2791a953a9200b3f61b8829c703b259f1483f87c5e99ce9cfaa18109775e0fc] <==
	I0610 11:16:42.182660       1 main.go:250] Node multinode-862380-m03 has CIDR [10.244.3.0/24] 
	I0610 11:16:52.195382       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0610 11:16:52.195667       1 main.go:227] handling current node
	I0610 11:16:52.195712       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0610 11:16:52.195733       1 main.go:250] Node multinode-862380-m02 has CIDR [10.244.1.0/24] 
	I0610 11:16:52.196527       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0610 11:16:52.196576       1 main.go:250] Node multinode-862380-m03 has CIDR [10.244.3.0/24] 
	I0610 11:17:02.201120       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0610 11:17:02.201157       1 main.go:227] handling current node
	I0610 11:17:02.201170       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0610 11:17:02.201174       1 main.go:250] Node multinode-862380-m02 has CIDR [10.244.1.0/24] 
	I0610 11:17:02.201290       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0610 11:17:02.201310       1 main.go:250] Node multinode-862380-m03 has CIDR [10.244.3.0/24] 
	I0610 11:17:12.206011       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0610 11:17:12.206052       1 main.go:227] handling current node
	I0610 11:17:12.206077       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0610 11:17:12.206082       1 main.go:250] Node multinode-862380-m02 has CIDR [10.244.1.0/24] 
	I0610 11:17:12.206206       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0610 11:17:12.206223       1 main.go:250] Node multinode-862380-m03 has CIDR [10.244.3.0/24] 
	I0610 11:17:22.210300       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0610 11:17:22.210339       1 main.go:227] handling current node
	I0610 11:17:22.210349       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0610 11:17:22.210354       1 main.go:250] Node multinode-862380-m02 has CIDR [10.244.1.0/24] 
	I0610 11:17:22.210478       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0610 11:17:22.210498       1 main.go:250] Node multinode-862380-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [58557fa016e58b7c0cbd020c0c94ce71b80658955335b632f9b63f06aaec7266] <==
	I0610 11:13:02.123301       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0610 11:13:02.123905       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0610 11:13:02.777087       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 11:13:02.828765       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0610 11:13:02.954007       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0610 11:13:02.965284       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.100]
	I0610 11:13:02.966672       1 controller.go:615] quota admission added evaluator for: endpoints
	I0610 11:13:02.971527       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 11:13:03.186545       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0610 11:13:03.905440       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0610 11:13:03.923077       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0610 11:13:03.947471       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0610 11:13:17.392025       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0610 11:13:17.443188       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0610 11:14:09.028969       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:60212: use of closed network connection
	E0610 11:14:09.201865       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:60226: use of closed network connection
	E0610 11:14:09.398218       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:60240: use of closed network connection
	E0610 11:14:09.568016       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:60260: use of closed network connection
	E0610 11:14:09.734030       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:60272: use of closed network connection
	E0610 11:14:09.901761       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:60288: use of closed network connection
	E0610 11:14:10.167362       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:60324: use of closed network connection
	E0610 11:14:10.328305       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:60356: use of closed network connection
	E0610 11:14:10.486910       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:60376: use of closed network connection
	E0610 11:14:10.646028       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:60394: use of closed network connection
	I0610 11:17:28.677586       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-apiserver [5971ca1a108b34acbf6ae63f70db7b15d696e6cd577d1f3356a2b6661bb028d8] <==
	I0610 11:19:10.942920       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 11:19:10.947489       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0610 11:19:10.950676       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0610 11:19:10.967143       1 shared_informer.go:320] Caches are synced for configmaps
	I0610 11:19:10.971470       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0610 11:19:10.950406       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0610 11:19:10.950485       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0610 11:19:10.950497       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 11:19:10.950506       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0610 11:19:10.974055       1 aggregator.go:165] initial CRD sync complete...
	I0610 11:19:10.974063       1 autoregister_controller.go:141] Starting autoregister controller
	I0610 11:19:10.974067       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0610 11:19:10.974072       1 cache.go:39] Caches are synced for autoregister controller
	E0610 11:19:10.983113       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0610 11:19:11.013538       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0610 11:19:11.013675       1 policy_source.go:224] refreshing policies
	I0610 11:19:11.056087       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 11:19:11.871471       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0610 11:19:13.259778       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0610 11:19:13.392805       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0610 11:19:13.409232       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0610 11:19:13.477285       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 11:19:13.496061       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0610 11:19:23.584430       1 controller.go:615] quota admission added evaluator for: endpoints
	I0610 11:19:23.656893       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [43dceb31898cc620cff1b69f4b915cc293db2955ad4fdfa09aaf24f4ba57bde1] <==
	I0610 11:19:24.192010       1 shared_informer.go:320] Caches are synced for garbage collector
	I0610 11:19:24.192099       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0610 11:19:44.832547       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.703983ms"
	I0610 11:19:44.832698       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.486µs"
	I0610 11:19:44.845446       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.368816ms"
	I0610 11:19:44.845768       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="94.34µs"
	I0610 11:19:49.142424       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-862380-m02\" does not exist"
	I0610 11:19:49.162087       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-862380-m02" podCIDRs=["10.244.1.0/24"]
	I0610 11:19:49.836516       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.849µs"
	I0610 11:19:51.028937       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.15µs"
	I0610 11:19:51.039967       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.412µs"
	I0610 11:19:51.050531       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.741µs"
	I0610 11:19:51.088299       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.973µs"
	I0610 11:19:51.100367       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.58µs"
	I0610 11:19:51.102471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.533µs"
	I0610 11:19:57.905355       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-862380-m02"
	I0610 11:19:57.926313       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.938µs"
	I0610 11:19:57.957968       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="86.28µs"
	I0610 11:20:01.519676       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.419629ms"
	I0610 11:20:01.520001       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="123.31µs"
	I0610 11:20:15.948796       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-862380-m02"
	I0610 11:20:17.066171       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-862380-m02"
	I0610 11:20:17.066546       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-862380-m03\" does not exist"
	I0610 11:20:17.081113       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-862380-m03" podCIDRs=["10.244.2.0/24"]
	I0610 11:20:26.281562       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-862380-m02"
	
	
	==> kube-controller-manager [4f84f021658bb7edbb72828c3cdce1348895737f86d83744cb73982fa6cdc4cb] <==
	I0610 11:13:52.708660       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-862380-m02\" does not exist"
	I0610 11:13:52.722288       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-862380-m02" podCIDRs=["10.244.1.0/24"]
	I0610 11:13:56.599440       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-862380-m02"
	I0610 11:14:02.696231       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-862380-m02"
	I0610 11:14:04.812013       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.073902ms"
	I0610 11:14:04.844703       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.653908ms"
	I0610 11:14:04.856741       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.989681ms"
	I0610 11:14:04.856833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.253µs"
	I0610 11:14:08.135921       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.773641ms"
	I0610 11:14:08.136390       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="108.171µs"
	I0610 11:14:08.608550       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.698712ms"
	I0610 11:14:08.608717       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.332µs"
	I0610 11:14:35.805259       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-862380-m02"
	I0610 11:14:35.813072       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-862380-m03\" does not exist"
	I0610 11:14:35.844034       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-862380-m03" podCIDRs=["10.244.2.0/24"]
	I0610 11:14:36.619041       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-862380-m03"
	I0610 11:14:45.309474       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-862380-m02"
	I0610 11:15:13.364901       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-862380-m02"
	I0610 11:15:14.709349       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-862380-m03\" does not exist"
	I0610 11:15:14.710042       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-862380-m02"
	I0610 11:15:14.728756       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-862380-m03" podCIDRs=["10.244.3.0/24"]
	I0610 11:15:23.264974       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-862380-m02"
	I0610 11:16:06.669704       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-862380-m03"
	I0610 11:16:06.711832       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.68571ms"
	I0610 11:16:06.711963       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="90.605µs"
	
	
	==> kube-proxy [55fbec1ed1f5c35125219a44fd079a722d49d9d8cbdb2455f8a70f01da71ed4e] <==
	I0610 11:19:12.780213       1 server_linux.go:69] "Using iptables proxy"
	I0610 11:19:12.859163       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.100"]
	I0610 11:19:12.979822       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 11:19:12.979893       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 11:19:12.979909       1 server_linux.go:165] "Using iptables Proxier"
	I0610 11:19:12.984330       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 11:19:12.984557       1 server.go:872] "Version info" version="v1.30.1"
	I0610 11:19:12.984646       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 11:19:12.985979       1 config.go:192] "Starting service config controller"
	I0610 11:19:12.986048       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 11:19:12.986092       1 config.go:101] "Starting endpoint slice config controller"
	I0610 11:19:12.986109       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 11:19:12.986671       1 config.go:319] "Starting node config controller"
	I0610 11:19:12.986710       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 11:19:13.086408       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 11:19:13.086461       1 shared_informer.go:320] Caches are synced for service config
	I0610 11:19:13.088348       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [d7dcbfcd0f6f950677096624f71b7ec58dbe647a45bfe1896dd52dd14753a55c] <==
	I0610 11:13:18.447696       1 server_linux.go:69] "Using iptables proxy"
	I0610 11:13:18.456560       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.100"]
	I0610 11:13:18.518217       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 11:13:18.518283       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 11:13:18.518306       1 server_linux.go:165] "Using iptables Proxier"
	I0610 11:13:18.523051       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 11:13:18.526662       1 server.go:872] "Version info" version="v1.30.1"
	I0610 11:13:18.528913       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 11:13:18.535521       1 config.go:192] "Starting service config controller"
	I0610 11:13:18.535558       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 11:13:18.535590       1 config.go:101] "Starting endpoint slice config controller"
	I0610 11:13:18.535623       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 11:13:18.537791       1 config.go:319] "Starting node config controller"
	I0610 11:13:18.537838       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 11:13:18.635911       1 shared_informer.go:320] Caches are synced for service config
	I0610 11:13:18.635937       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 11:13:18.637971       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [50310344784fc7f085c0a0d226fde85f9b838c4bcfeaafbde1cf90adf4432aee] <==
	I0610 11:19:08.529645       1 serving.go:380] Generated self-signed cert in-memory
	W0610 11:19:10.875009       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0610 11:19:10.875148       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 11:19:10.875211       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0610 11:19:10.875258       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 11:19:10.942932       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0610 11:19:10.943092       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 11:19:10.949867       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0610 11:19:10.950174       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 11:19:10.950246       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 11:19:10.950283       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 11:19:11.050461       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [9c465791f6493e7b755a5672c14ce27cf99149ae704df0b5b7ba7589cbdccd3f] <==
	E0610 11:13:02.071870       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0610 11:13:02.112485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 11:13:02.112516       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0610 11:13:02.116397       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0610 11:13:02.116437       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0610 11:13:02.116860       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 11:13:02.116897       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0610 11:13:02.127966       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0610 11:13:02.128007       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0610 11:13:02.162315       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 11:13:02.162358       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 11:13:02.179534       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 11:13:02.179669       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0610 11:13:02.259443       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 11:13:02.259558       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 11:13:02.427099       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 11:13:02.427177       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 11:13:02.432354       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 11:13:02.432396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0610 11:13:02.444411       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 11:13:02.444450       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0610 11:13:02.549958       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0610 11:13:02.550000       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0610 11:13:05.027573       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0610 11:17:28.690834       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jun 10 11:19:07 multinode-862380 kubelet[3078]: E0610 11:19:07.454780    3078 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.100:8443: connect: connection refused" node="multinode-862380"
	Jun 10 11:19:08 multinode-862380 kubelet[3078]: I0610 11:19:08.256222    3078 kubelet_node_status.go:73] "Attempting to register node" node="multinode-862380"
	Jun 10 11:19:11 multinode-862380 kubelet[3078]: I0610 11:19:11.104108    3078 kubelet_node_status.go:112] "Node was previously registered" node="multinode-862380"
	Jun 10 11:19:11 multinode-862380 kubelet[3078]: I0610 11:19:11.104198    3078 kubelet_node_status.go:76] "Successfully registered node" node="multinode-862380"
	Jun 10 11:19:11 multinode-862380 kubelet[3078]: I0610 11:19:11.105426    3078 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 10 11:19:11 multinode-862380 kubelet[3078]: I0610 11:19:11.106644    3078 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 10 11:19:11 multinode-862380 kubelet[3078]: I0610 11:19:11.723540    3078 apiserver.go:52] "Watching apiserver"
	Jun 10 11:19:11 multinode-862380 kubelet[3078]: I0610 11:19:11.727075    3078 topology_manager.go:215] "Topology Admit Handler" podUID="6d6d1e96-ea64-4ea0-855a-0e8fedb5164d" podNamespace="kube-system" podName="kindnet-bnpjz"
	Jun 10 11:19:11 multinode-862380 kubelet[3078]: I0610 11:19:11.727209    3078 topology_manager.go:215] "Topology Admit Handler" podUID="56f70aa4-9ef6-4257-86b3-4fd0968b2e37" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vfxw9"
	Jun 10 11:19:11 multinode-862380 kubelet[3078]: I0610 11:19:11.727281    3078 topology_manager.go:215] "Topology Admit Handler" podUID="d6793da8-f52b-488b-a0ec-88cbf6460c13" podNamespace="kube-system" podName="kube-proxy-gghfj"
	Jun 10 11:19:11 multinode-862380 kubelet[3078]: I0610 11:19:11.727408    3078 topology_manager.go:215] "Topology Admit Handler" podUID="7966a309-dca2-488e-b683-0ff37fa01fe3" podNamespace="kube-system" podName="storage-provisioner"
	Jun 10 11:19:11 multinode-862380 kubelet[3078]: I0610 11:19:11.727516    3078 topology_manager.go:215] "Topology Admit Handler" podUID="237e1205-8c4b-4234-ad0f-80e35f097827" podNamespace="default" podName="busybox-fc5497c4f-jx8f9"
	Jun 10 11:19:11 multinode-862380 kubelet[3078]: I0610 11:19:11.745147    3078 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 10 11:19:11 multinode-862380 kubelet[3078]: I0610 11:19:11.833983    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d6d1e96-ea64-4ea0-855a-0e8fedb5164d-lib-modules\") pod \"kindnet-bnpjz\" (UID: \"6d6d1e96-ea64-4ea0-855a-0e8fedb5164d\") " pod="kube-system/kindnet-bnpjz"
	Jun 10 11:19:11 multinode-862380 kubelet[3078]: I0610 11:19:11.834100    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7966a309-dca2-488e-b683-0ff37fa01fe3-tmp\") pod \"storage-provisioner\" (UID: \"7966a309-dca2-488e-b683-0ff37fa01fe3\") " pod="kube-system/storage-provisioner"
	Jun 10 11:19:11 multinode-862380 kubelet[3078]: I0610 11:19:11.834187    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d6d1e96-ea64-4ea0-855a-0e8fedb5164d-xtables-lock\") pod \"kindnet-bnpjz\" (UID: \"6d6d1e96-ea64-4ea0-855a-0e8fedb5164d\") " pod="kube-system/kindnet-bnpjz"
	Jun 10 11:19:11 multinode-862380 kubelet[3078]: I0610 11:19:11.835343    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6793da8-f52b-488b-a0ec-88cbf6460c13-lib-modules\") pod \"kube-proxy-gghfj\" (UID: \"d6793da8-f52b-488b-a0ec-88cbf6460c13\") " pod="kube-system/kube-proxy-gghfj"
	Jun 10 11:19:11 multinode-862380 kubelet[3078]: I0610 11:19:11.835474    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6d6d1e96-ea64-4ea0-855a-0e8fedb5164d-cni-cfg\") pod \"kindnet-bnpjz\" (UID: \"6d6d1e96-ea64-4ea0-855a-0e8fedb5164d\") " pod="kube-system/kindnet-bnpjz"
	Jun 10 11:19:11 multinode-862380 kubelet[3078]: I0610 11:19:11.835726    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6793da8-f52b-488b-a0ec-88cbf6460c13-xtables-lock\") pod \"kube-proxy-gghfj\" (UID: \"d6793da8-f52b-488b-a0ec-88cbf6460c13\") " pod="kube-system/kube-proxy-gghfj"
	Jun 10 11:19:20 multinode-862380 kubelet[3078]: I0610 11:19:20.131198    3078 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jun 10 11:20:06 multinode-862380 kubelet[3078]: E0610 11:20:06.771336    3078 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 11:20:06 multinode-862380 kubelet[3078]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 11:20:06 multinode-862380 kubelet[3078]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 11:20:06 multinode-862380 kubelet[3078]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 11:20:06 multinode-862380 kubelet[3078]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 11:20:28.694715   42155 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19046-3880/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-862380 -n multinode-862380
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-862380 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (304.61s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 stop
E0610 11:21:57.913708   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-862380 stop: exit status 82 (2m0.461287644s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-862380-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-862380 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-862380 status: exit status 3 (18.849001033s)

                                                
                                                
-- stdout --
	multinode-862380
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-862380-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 11:22:51.969301   42832 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host
	E0610 11:22:51.969339   42832 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.47:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-862380 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-862380 -n multinode-862380
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-862380 logs -n 25: (1.413232129s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-862380 ssh -n                                                                 | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | multinode-862380-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-862380 cp multinode-862380-m02:/home/docker/cp-test.txt                       | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | multinode-862380:/home/docker/cp-test_multinode-862380-m02_multinode-862380.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-862380 ssh -n                                                                 | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | multinode-862380-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-862380 ssh -n multinode-862380 sudo cat                                       | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | /home/docker/cp-test_multinode-862380-m02_multinode-862380.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-862380 cp multinode-862380-m02:/home/docker/cp-test.txt                       | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | multinode-862380-m03:/home/docker/cp-test_multinode-862380-m02_multinode-862380-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-862380 ssh -n                                                                 | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | multinode-862380-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-862380 ssh -n multinode-862380-m03 sudo cat                                   | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | /home/docker/cp-test_multinode-862380-m02_multinode-862380-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-862380 cp testdata/cp-test.txt                                                | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | multinode-862380-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-862380 ssh -n                                                                 | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | multinode-862380-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-862380 cp multinode-862380-m03:/home/docker/cp-test.txt                       | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4163337793/001/cp-test_multinode-862380-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-862380 ssh -n                                                                 | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | multinode-862380-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-862380 cp multinode-862380-m03:/home/docker/cp-test.txt                       | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | multinode-862380:/home/docker/cp-test_multinode-862380-m03_multinode-862380.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-862380 ssh -n                                                                 | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | multinode-862380-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-862380 ssh -n multinode-862380 sudo cat                                       | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | /home/docker/cp-test_multinode-862380-m03_multinode-862380.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-862380 cp multinode-862380-m03:/home/docker/cp-test.txt                       | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | multinode-862380-m02:/home/docker/cp-test_multinode-862380-m03_multinode-862380-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-862380 ssh -n                                                                 | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | multinode-862380-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-862380 ssh -n multinode-862380-m02 sudo cat                                   | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	|         | /home/docker/cp-test_multinode-862380-m03_multinode-862380-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-862380 node stop m03                                                          | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:14 UTC |
	| node    | multinode-862380 node start                                                             | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:14 UTC | 10 Jun 24 11:15 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-862380                                                                | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:15 UTC |                     |
	| stop    | -p multinode-862380                                                                     | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:15 UTC |                     |
	| start   | -p multinode-862380                                                                     | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:17 UTC | 10 Jun 24 11:20 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-862380                                                                | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:20 UTC |                     |
	| node    | multinode-862380 node delete                                                            | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:20 UTC | 10 Jun 24 11:20 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-862380 stop                                                                   | multinode-862380 | jenkins | v1.33.1 | 10 Jun 24 11:20 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 11:17:27
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 11:17:27.742136   40730 out.go:291] Setting OutFile to fd 1 ...
	I0610 11:17:27.742357   40730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:17:27.742369   40730 out.go:304] Setting ErrFile to fd 2...
	I0610 11:17:27.742376   40730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:17:27.742815   40730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 11:17:27.743426   40730 out.go:298] Setting JSON to false
	I0610 11:17:27.744279   40730 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":3589,"bootTime":1718014659,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 11:17:27.744339   40730 start.go:139] virtualization: kvm guest
	I0610 11:17:27.746715   40730 out.go:177] * [multinode-862380] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 11:17:27.748511   40730 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 11:17:27.748446   40730 notify.go:220] Checking for updates...
	I0610 11:17:27.749869   40730 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 11:17:27.751357   40730 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 11:17:27.752674   40730 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 11:17:27.754079   40730 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 11:17:27.755378   40730 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 11:17:27.757046   40730 config.go:182] Loaded profile config "multinode-862380": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:17:27.757156   40730 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 11:17:27.757548   40730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:17:27.757589   40730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:17:27.773929   40730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41473
	I0610 11:17:27.774423   40730 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:17:27.775103   40730 main.go:141] libmachine: Using API Version  1
	I0610 11:17:27.775124   40730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:17:27.775534   40730 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:17:27.775762   40730 main.go:141] libmachine: (multinode-862380) Calling .DriverName
	I0610 11:17:27.813107   40730 out.go:177] * Using the kvm2 driver based on existing profile
	I0610 11:17:27.814545   40730 start.go:297] selected driver: kvm2
	I0610 11:17:27.814561   40730 start.go:901] validating driver "kvm2" against &{Name:multinode-862380 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.1 ClusterName:multinode-862380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.68 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:17:27.814739   40730 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 11:17:27.815129   40730 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 11:17:27.815205   40730 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19046-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0610 11:17:27.831006   40730 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0610 11:17:27.831638   40730 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 11:17:27.831683   40730 cni.go:84] Creating CNI manager for ""
	I0610 11:17:27.831694   40730 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0610 11:17:27.831744   40730 start.go:340] cluster config:
	{Name:multinode-862380 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-862380 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.68 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:17:27.831849   40730 iso.go:125] acquiring lock: {Name:mk85871dbcb370cef71a4aa2ff7ef43503908c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 11:17:27.833997   40730 out.go:177] * Starting "multinode-862380" primary control-plane node in "multinode-862380" cluster
	I0610 11:17:27.835470   40730 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 11:17:27.835508   40730 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0610 11:17:27.835517   40730 cache.go:56] Caching tarball of preloaded images
	I0610 11:17:27.835595   40730 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 11:17:27.835606   40730 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0610 11:17:27.835721   40730 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/multinode-862380/config.json ...
	I0610 11:17:27.835928   40730 start.go:360] acquireMachinesLock for multinode-862380: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 11:17:27.835974   40730 start.go:364] duration metric: took 24.029µs to acquireMachinesLock for "multinode-862380"
	I0610 11:17:27.835987   40730 start.go:96] Skipping create...Using existing machine configuration
	I0610 11:17:27.835995   40730 fix.go:54] fixHost starting: 
	I0610 11:17:27.836236   40730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:17:27.836256   40730 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:17:27.851223   40730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39651
	I0610 11:17:27.851613   40730 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:17:27.852228   40730 main.go:141] libmachine: Using API Version  1
	I0610 11:17:27.852257   40730 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:17:27.852673   40730 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:17:27.852909   40730 main.go:141] libmachine: (multinode-862380) Calling .DriverName
	I0610 11:17:27.853094   40730 main.go:141] libmachine: (multinode-862380) Calling .GetState
	I0610 11:17:27.854904   40730 fix.go:112] recreateIfNeeded on multinode-862380: state=Running err=<nil>
	W0610 11:17:27.854966   40730 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 11:17:27.856939   40730 out.go:177] * Updating the running kvm2 "multinode-862380" VM ...
	I0610 11:17:27.858425   40730 machine.go:94] provisionDockerMachine start ...
	I0610 11:17:27.858444   40730 main.go:141] libmachine: (multinode-862380) Calling .DriverName
	I0610 11:17:27.858675   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHHostname
	I0610 11:17:27.861582   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:27.862219   40730 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:17:27.862261   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:27.862452   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHPort
	I0610 11:17:27.862655   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:17:27.862811   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:17:27.862923   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHUsername
	I0610 11:17:27.863077   40730 main.go:141] libmachine: Using SSH client type: native
	I0610 11:17:27.863383   40730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0610 11:17:27.863397   40730 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 11:17:27.985941   40730 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-862380
	
	I0610 11:17:27.985982   40730 main.go:141] libmachine: (multinode-862380) Calling .GetMachineName
	I0610 11:17:27.986204   40730 buildroot.go:166] provisioning hostname "multinode-862380"
	I0610 11:17:27.986245   40730 main.go:141] libmachine: (multinode-862380) Calling .GetMachineName
	I0610 11:17:27.986472   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHHostname
	I0610 11:17:27.989280   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:27.989783   40730 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:17:27.989812   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:27.989981   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHPort
	I0610 11:17:27.990173   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:17:27.990346   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:17:27.990491   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHUsername
	I0610 11:17:27.990630   40730 main.go:141] libmachine: Using SSH client type: native
	I0610 11:17:27.990790   40730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0610 11:17:27.990810   40730 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-862380 && echo "multinode-862380" | sudo tee /etc/hostname
	I0610 11:17:28.128093   40730 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-862380
	
	I0610 11:17:28.128124   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHHostname
	I0610 11:17:28.131162   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:28.131624   40730 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:17:28.131638   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:28.131847   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHPort
	I0610 11:17:28.132033   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:17:28.132194   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:17:28.132351   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHUsername
	I0610 11:17:28.132546   40730 main.go:141] libmachine: Using SSH client type: native
	I0610 11:17:28.132734   40730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0610 11:17:28.132752   40730 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-862380' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-862380/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-862380' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 11:17:28.245813   40730 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 11:17:28.245851   40730 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19046-3880/.minikube CaCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19046-3880/.minikube}
	I0610 11:17:28.245876   40730 buildroot.go:174] setting up certificates
	I0610 11:17:28.245886   40730 provision.go:84] configureAuth start
	I0610 11:17:28.245903   40730 main.go:141] libmachine: (multinode-862380) Calling .GetMachineName
	I0610 11:17:28.246189   40730 main.go:141] libmachine: (multinode-862380) Calling .GetIP
	I0610 11:17:28.248852   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:28.249297   40730 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:17:28.249326   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:28.249486   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHHostname
	I0610 11:17:28.252038   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:28.252459   40730 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:17:28.252496   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:28.252592   40730 provision.go:143] copyHostCerts
	I0610 11:17:28.252621   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 11:17:28.252679   40730 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem, removing ...
	I0610 11:17:28.252687   40730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 11:17:28.252754   40730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem (1082 bytes)
	I0610 11:17:28.252832   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 11:17:28.252849   40730 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem, removing ...
	I0610 11:17:28.252855   40730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 11:17:28.252881   40730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem (1123 bytes)
	I0610 11:17:28.252928   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 11:17:28.252963   40730 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem, removing ...
	I0610 11:17:28.252973   40730 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 11:17:28.253002   40730 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem (1675 bytes)
	I0610 11:17:28.253053   40730 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem org=jenkins.multinode-862380 san=[127.0.0.1 192.168.39.100 localhost minikube multinode-862380]
	I0610 11:17:28.383126   40730 provision.go:177] copyRemoteCerts
	I0610 11:17:28.383208   40730 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 11:17:28.383241   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHHostname
	I0610 11:17:28.385756   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:28.386110   40730 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:17:28.386141   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:28.386340   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHPort
	I0610 11:17:28.386519   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:17:28.386709   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHUsername
	I0610 11:17:28.386808   40730 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/multinode-862380/id_rsa Username:docker}
	I0610 11:17:28.478398   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 11:17:28.478476   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 11:17:28.502382   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 11:17:28.502444   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0610 11:17:28.525674   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 11:17:28.525733   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 11:17:28.549055   40730 provision.go:87] duration metric: took 303.155105ms to configureAuth
	I0610 11:17:28.549081   40730 buildroot.go:189] setting minikube options for container-runtime
	I0610 11:17:28.549263   40730 config.go:182] Loaded profile config "multinode-862380": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:17:28.549323   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHHostname
	I0610 11:17:28.551963   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:28.552308   40730 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:17:28.552337   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:17:28.552519   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHPort
	I0610 11:17:28.552709   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:17:28.552856   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:17:28.553005   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHUsername
	I0610 11:17:28.553150   40730 main.go:141] libmachine: Using SSH client type: native
	I0610 11:17:28.553314   40730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0610 11:17:28.553328   40730 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 11:18:59.348298   40730 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 11:18:59.348320   40730 machine.go:97] duration metric: took 1m31.489881873s to provisionDockerMachine
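	(Annotation) The "%!s(MISSING)" token in the command logged at 11:17:28.553 above is a Go format-verb artifact in minikube's logging, not part of what actually ran. Reconstructed from the output echoed back at 11:18:59.348, the provisioning step plausibly executed something equivalent to the sketch below; the %s verb and exact quoting are assumptions.
	
	    # Reconstruction of the logged command; the %s format verb is an assumption.
	    sudo mkdir -p /etc/sysconfig && printf %s "
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	
	The trailing "systemctl restart crio" is what accounts for most of the 1m31s provisionDockerMachine duration reported above (the SSH call was issued at 11:17:28.55 and returned at 11:18:59.34).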
	I0610 11:18:59.348333   40730 start.go:293] postStartSetup for "multinode-862380" (driver="kvm2")
	I0610 11:18:59.348348   40730 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 11:18:59.348363   40730 main.go:141] libmachine: (multinode-862380) Calling .DriverName
	I0610 11:18:59.348685   40730 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 11:18:59.348715   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHHostname
	I0610 11:18:59.351889   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:18:59.352391   40730 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:18:59.352416   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:18:59.352570   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHPort
	I0610 11:18:59.352756   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:18:59.352914   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHUsername
	I0610 11:18:59.353068   40730 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/multinode-862380/id_rsa Username:docker}
	I0610 11:18:59.441547   40730 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 11:18:59.445446   40730 command_runner.go:130] > NAME=Buildroot
	I0610 11:18:59.445468   40730 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0610 11:18:59.445474   40730 command_runner.go:130] > ID=buildroot
	I0610 11:18:59.445482   40730 command_runner.go:130] > VERSION_ID=2023.02.9
	I0610 11:18:59.445489   40730 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0610 11:18:59.445580   40730 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 11:18:59.445596   40730 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/addons for local assets ...
	I0610 11:18:59.445665   40730 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/files for local assets ...
	I0610 11:18:59.445753   40730 filesync.go:149] local asset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> 107582.pem in /etc/ssl/certs
	I0610 11:18:59.445763   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /etc/ssl/certs/107582.pem
	I0610 11:18:59.445862   40730 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 11:18:59.454820   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /etc/ssl/certs/107582.pem (1708 bytes)
	I0610 11:18:59.478152   40730 start.go:296] duration metric: took 129.804765ms for postStartSetup
	I0610 11:18:59.478230   40730 fix.go:56] duration metric: took 1m31.64223361s for fixHost
	I0610 11:18:59.478254   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHHostname
	I0610 11:18:59.480834   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:18:59.481323   40730 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:18:59.481348   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:18:59.481530   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHPort
	I0610 11:18:59.481738   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:18:59.481891   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:18:59.482040   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHUsername
	I0610 11:18:59.482214   40730 main.go:141] libmachine: Using SSH client type: native
	I0610 11:18:59.482370   40730 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0610 11:18:59.482380   40730 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 11:18:59.593390   40730 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718018339.576506580
	
	I0610 11:18:59.593412   40730 fix.go:216] guest clock: 1718018339.576506580
	I0610 11:18:59.593422   40730 fix.go:229] Guest: 2024-06-10 11:18:59.57650658 +0000 UTC Remote: 2024-06-10 11:18:59.478235633 +0000 UTC m=+91.771991040 (delta=98.270947ms)
	I0610 11:18:59.593456   40730 fix.go:200] guest clock delta is within tolerance: 98.270947ms
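	(Annotation) The clock check above runs a format-templated date command over SSH; given the seconds.nanoseconds output, "date +%!s(MISSING).%!N(MISSING)" is presumably "date +%s.%N". The reported skew is the guest clock minus the host-observed remote timestamp, as in this sketch using the values from the log:
	
	    # Values copied from the log above; the %s.%N date format is an assumption.
	    guest=1718018339.576506580    # guest clock as returned by the date command
	    remote=1718018339.478235633   # host-side timestamp when the SSH call returned
	    echo "delta: $(echo "$guest - $remote" | bc)s"   # = .098270947s, within tolerance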
	I0610 11:18:59.593462   40730 start.go:83] releasing machines lock for "multinode-862380", held for 1m31.757479488s
	I0610 11:18:59.593493   40730 main.go:141] libmachine: (multinode-862380) Calling .DriverName
	I0610 11:18:59.593726   40730 main.go:141] libmachine: (multinode-862380) Calling .GetIP
	I0610 11:18:59.596171   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:18:59.596567   40730 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:18:59.596595   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:18:59.596750   40730 main.go:141] libmachine: (multinode-862380) Calling .DriverName
	I0610 11:18:59.597319   40730 main.go:141] libmachine: (multinode-862380) Calling .DriverName
	I0610 11:18:59.597508   40730 main.go:141] libmachine: (multinode-862380) Calling .DriverName
	I0610 11:18:59.597566   40730 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 11:18:59.597622   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHHostname
	I0610 11:18:59.597731   40730 ssh_runner.go:195] Run: cat /version.json
	I0610 11:18:59.597752   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHHostname
	I0610 11:18:59.600117   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:18:59.600383   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:18:59.600441   40730 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:18:59.600469   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:18:59.600559   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHPort
	I0610 11:18:59.600723   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:18:59.600815   40730 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:18:59.600838   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:18:59.600906   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHUsername
	I0610 11:18:59.601031   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHPort
	I0610 11:18:59.601104   40730 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/multinode-862380/id_rsa Username:docker}
	I0610 11:18:59.601190   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:18:59.601332   40730 main.go:141] libmachine: (multinode-862380) Calling .GetSSHUsername
	I0610 11:18:59.601456   40730 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/multinode-862380/id_rsa Username:docker}
	I0610 11:18:59.681468   40730 command_runner.go:130] > {"iso_version": "v1.33.1-1717668912-19038", "kicbase_version": "v0.0.44-1717518322-19024", "minikube_version": "v1.33.1", "commit": "7bc04027a908a7d4d31c30e8938372fcb07a9689"}
	I0610 11:18:59.711411   40730 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0610 11:18:59.712150   40730 ssh_runner.go:195] Run: systemctl --version
	I0610 11:18:59.718080   40730 command_runner.go:130] > systemd 252 (252)
	I0610 11:18:59.718123   40730 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0610 11:18:59.718187   40730 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 11:18:59.876483   40730 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0610 11:18:59.881982   40730 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0610 11:18:59.882034   40730 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 11:18:59.882096   40730 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 11:18:59.891440   40730 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
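	(Annotation) The find invocation two lines above is another format-templated log line: "%!p(MISSING)" stands in for find's %p print directive, and the unescaped parentheses and trailing ";" reflect how the command is logged rather than how a shell would accept it. Run by hand, an equivalent, shell-escaped rendering (an assumed reconstruction) looks roughly like:
	
	    # Disable any bridge/podman CNI configs that are not already disabled;
	    # quoting and escaping added, %p assumed for "%!p(MISSING)".
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -printf '%p, ' -exec sh -c "sudo mv {} {}.mk_disabled" \;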
	I0610 11:18:59.891466   40730 start.go:494] detecting cgroup driver to use...
	I0610 11:18:59.891556   40730 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 11:18:59.909949   40730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 11:18:59.923584   40730 docker.go:217] disabling cri-docker service (if available) ...
	I0610 11:18:59.923649   40730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 11:18:59.937462   40730 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 11:18:59.951603   40730 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 11:19:00.105032   40730 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 11:19:00.245811   40730 docker.go:233] disabling docker service ...
	I0610 11:19:00.245897   40730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 11:19:00.263047   40730 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 11:19:00.276364   40730 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 11:19:00.416005   40730 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 11:19:00.557845   40730 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 11:19:00.571660   40730 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 11:19:00.589228   40730 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0610 11:19:00.589680   40730 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0610 11:19:00.589735   40730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:19:00.599480   40730 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 11:19:00.599548   40730 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:19:00.609285   40730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:19:00.619165   40730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:19:00.629352   40730 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 11:19:00.639278   40730 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:19:00.649121   40730 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:19:00.659762   40730 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:19:00.669552   40730 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 11:19:00.678331   40730 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0610 11:19:00.678413   40730 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 11:19:00.687477   40730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:19:00.819219   40730 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0610 11:19:04.048749   40730 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.229491822s)
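	(Annotation) The block of sed commands between 11:19:00.589 and 11:19:00.669 rewrites the CRI-O drop-in before the restart that completes above. The intended end state of /etc/crio/crio.conf.d/02-crio.conf is not captured verbatim in the log; inferred from those sed expressions, it can be spot-checked with:
	
	    # Inferred key settings after minikube's sed edits; the exact file layout is an assumption.
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.9"
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"
	    # default_sysctls = [
	    #   "net.ipv4.ip_unprivileged_port_start=0",
	    # ]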
	I0610 11:19:04.048783   40730 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 11:19:04.048826   40730 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 11:19:04.053224   40730 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0610 11:19:04.053256   40730 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0610 11:19:04.053266   40730 command_runner.go:130] > Device: 0,22	Inode: 1324        Links: 1
	I0610 11:19:04.053275   40730 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 11:19:04.053282   40730 command_runner.go:130] > Access: 2024-06-10 11:19:03.915710586 +0000
	I0610 11:19:04.053291   40730 command_runner.go:130] > Modify: 2024-06-10 11:19:03.915710586 +0000
	I0610 11:19:04.053298   40730 command_runner.go:130] > Change: 2024-06-10 11:19:03.915710586 +0000
	I0610 11:19:04.053303   40730 command_runner.go:130] >  Birth: -
	I0610 11:19:04.053355   40730 start.go:562] Will wait 60s for crictl version
	I0610 11:19:04.053406   40730 ssh_runner.go:195] Run: which crictl
	I0610 11:19:04.056899   40730 command_runner.go:130] > /usr/bin/crictl
	I0610 11:19:04.056982   40730 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 11:19:04.092521   40730 command_runner.go:130] > Version:  0.1.0
	I0610 11:19:04.092544   40730 command_runner.go:130] > RuntimeName:  cri-o
	I0610 11:19:04.092549   40730 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0610 11:19:04.092554   40730 command_runner.go:130] > RuntimeApiVersion:  v1
	I0610 11:19:04.092571   40730 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0610 11:19:04.092637   40730 ssh_runner.go:195] Run: crio --version
	I0610 11:19:04.122131   40730 command_runner.go:130] > crio version 1.29.1
	I0610 11:19:04.122154   40730 command_runner.go:130] > Version:        1.29.1
	I0610 11:19:04.122162   40730 command_runner.go:130] > GitCommit:      unknown
	I0610 11:19:04.122168   40730 command_runner.go:130] > GitCommitDate:  unknown
	I0610 11:19:04.122174   40730 command_runner.go:130] > GitTreeState:   clean
	I0610 11:19:04.122183   40730 command_runner.go:130] > BuildDate:      2024-06-06T15:30:03Z
	I0610 11:19:04.122189   40730 command_runner.go:130] > GoVersion:      go1.21.6
	I0610 11:19:04.122195   40730 command_runner.go:130] > Compiler:       gc
	I0610 11:19:04.122202   40730 command_runner.go:130] > Platform:       linux/amd64
	I0610 11:19:04.122211   40730 command_runner.go:130] > Linkmode:       dynamic
	I0610 11:19:04.122216   40730 command_runner.go:130] > BuildTags:      
	I0610 11:19:04.122221   40730 command_runner.go:130] >   containers_image_ostree_stub
	I0610 11:19:04.122226   40730 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0610 11:19:04.122229   40730 command_runner.go:130] >   btrfs_noversion
	I0610 11:19:04.122234   40730 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0610 11:19:04.122238   40730 command_runner.go:130] >   libdm_no_deferred_remove
	I0610 11:19:04.122244   40730 command_runner.go:130] >   seccomp
	I0610 11:19:04.122251   40730 command_runner.go:130] > LDFlags:          unknown
	I0610 11:19:04.122257   40730 command_runner.go:130] > SeccompEnabled:   true
	I0610 11:19:04.122267   40730 command_runner.go:130] > AppArmorEnabled:  false
	I0610 11:19:04.122331   40730 ssh_runner.go:195] Run: crio --version
	I0610 11:19:04.149089   40730 command_runner.go:130] > crio version 1.29.1
	I0610 11:19:04.149111   40730 command_runner.go:130] > Version:        1.29.1
	I0610 11:19:04.149120   40730 command_runner.go:130] > GitCommit:      unknown
	I0610 11:19:04.149126   40730 command_runner.go:130] > GitCommitDate:  unknown
	I0610 11:19:04.149132   40730 command_runner.go:130] > GitTreeState:   clean
	I0610 11:19:04.149143   40730 command_runner.go:130] > BuildDate:      2024-06-06T15:30:03Z
	I0610 11:19:04.149149   40730 command_runner.go:130] > GoVersion:      go1.21.6
	I0610 11:19:04.149155   40730 command_runner.go:130] > Compiler:       gc
	I0610 11:19:04.149161   40730 command_runner.go:130] > Platform:       linux/amd64
	I0610 11:19:04.149168   40730 command_runner.go:130] > Linkmode:       dynamic
	I0610 11:19:04.149177   40730 command_runner.go:130] > BuildTags:      
	I0610 11:19:04.149189   40730 command_runner.go:130] >   containers_image_ostree_stub
	I0610 11:19:04.149201   40730 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0610 11:19:04.149209   40730 command_runner.go:130] >   btrfs_noversion
	I0610 11:19:04.149218   40730 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0610 11:19:04.149229   40730 command_runner.go:130] >   libdm_no_deferred_remove
	I0610 11:19:04.149236   40730 command_runner.go:130] >   seccomp
	I0610 11:19:04.149252   40730 command_runner.go:130] > LDFlags:          unknown
	I0610 11:19:04.149259   40730 command_runner.go:130] > SeccompEnabled:   true
	I0610 11:19:04.149270   40730 command_runner.go:130] > AppArmorEnabled:  false
	I0610 11:19:04.152391   40730 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0610 11:19:04.154314   40730 main.go:141] libmachine: (multinode-862380) Calling .GetIP
	I0610 11:19:04.157326   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:19:04.157744   40730 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:19:04.157769   40730 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:19:04.157988   40730 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0610 11:19:04.163915   40730 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0610 11:19:04.164032   40730 kubeadm.go:877] updating cluster {Name:multinode-862380 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.1 ClusterName:multinode-862380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.68 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 11:19:04.164182   40730 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 11:19:04.164340   40730 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 11:19:04.213931   40730 command_runner.go:130] > {
	I0610 11:19:04.213954   40730 command_runner.go:130] >   "images": [
	I0610 11:19:04.213960   40730 command_runner.go:130] >     {
	I0610 11:19:04.213971   40730 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0610 11:19:04.213977   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.213985   40730 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0610 11:19:04.213991   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214002   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.214014   40730 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0610 11:19:04.214024   40730 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0610 11:19:04.214031   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214039   40730 command_runner.go:130] >       "size": "65291810",
	I0610 11:19:04.214048   40730 command_runner.go:130] >       "uid": null,
	I0610 11:19:04.214055   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.214065   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.214075   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.214081   40730 command_runner.go:130] >     },
	I0610 11:19:04.214087   40730 command_runner.go:130] >     {
	I0610 11:19:04.214098   40730 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0610 11:19:04.214108   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.214117   40730 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0610 11:19:04.214124   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214134   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.214150   40730 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0610 11:19:04.214163   40730 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0610 11:19:04.214171   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214179   40730 command_runner.go:130] >       "size": "65908273",
	I0610 11:19:04.214186   40730 command_runner.go:130] >       "uid": null,
	I0610 11:19:04.214197   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.214206   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.214213   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.214222   40730 command_runner.go:130] >     },
	I0610 11:19:04.214229   40730 command_runner.go:130] >     {
	I0610 11:19:04.214242   40730 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0610 11:19:04.214250   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.214258   40730 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0610 11:19:04.214265   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214275   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.214289   40730 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0610 11:19:04.214305   40730 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0610 11:19:04.214313   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214322   40730 command_runner.go:130] >       "size": "1363676",
	I0610 11:19:04.214331   40730 command_runner.go:130] >       "uid": null,
	I0610 11:19:04.214338   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.214347   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.214354   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.214363   40730 command_runner.go:130] >     },
	I0610 11:19:04.214369   40730 command_runner.go:130] >     {
	I0610 11:19:04.214383   40730 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0610 11:19:04.214393   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.214403   40730 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0610 11:19:04.214412   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214420   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.214436   40730 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0610 11:19:04.214457   40730 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0610 11:19:04.214466   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214473   40730 command_runner.go:130] >       "size": "31470524",
	I0610 11:19:04.214483   40730 command_runner.go:130] >       "uid": null,
	I0610 11:19:04.214493   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.214503   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.214511   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.214520   40730 command_runner.go:130] >     },
	I0610 11:19:04.214527   40730 command_runner.go:130] >     {
	I0610 11:19:04.214540   40730 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0610 11:19:04.214550   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.214559   40730 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0610 11:19:04.214568   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214576   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.214592   40730 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0610 11:19:04.214607   40730 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0610 11:19:04.214616   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214624   40730 command_runner.go:130] >       "size": "61245718",
	I0610 11:19:04.214634   40730 command_runner.go:130] >       "uid": null,
	I0610 11:19:04.214645   40730 command_runner.go:130] >       "username": "nonroot",
	I0610 11:19:04.214655   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.214664   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.214670   40730 command_runner.go:130] >     },
	I0610 11:19:04.214679   40730 command_runner.go:130] >     {
	I0610 11:19:04.214691   40730 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0610 11:19:04.214700   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.214708   40730 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0610 11:19:04.214716   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214723   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.214739   40730 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0610 11:19:04.214755   40730 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0610 11:19:04.214764   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214773   40730 command_runner.go:130] >       "size": "150779692",
	I0610 11:19:04.214782   40730 command_runner.go:130] >       "uid": {
	I0610 11:19:04.214789   40730 command_runner.go:130] >         "value": "0"
	I0610 11:19:04.214798   40730 command_runner.go:130] >       },
	I0610 11:19:04.214805   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.214815   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.214823   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.214832   40730 command_runner.go:130] >     },
	I0610 11:19:04.214839   40730 command_runner.go:130] >     {
	I0610 11:19:04.214854   40730 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0610 11:19:04.214863   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.214872   40730 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0610 11:19:04.214880   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214887   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.214903   40730 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0610 11:19:04.214918   40730 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0610 11:19:04.214927   40730 command_runner.go:130] >       ],
	I0610 11:19:04.214936   40730 command_runner.go:130] >       "size": "117601759",
	I0610 11:19:04.214944   40730 command_runner.go:130] >       "uid": {
	I0610 11:19:04.214951   40730 command_runner.go:130] >         "value": "0"
	I0610 11:19:04.214959   40730 command_runner.go:130] >       },
	I0610 11:19:04.214967   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.214977   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.214985   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.214992   40730 command_runner.go:130] >     },
	I0610 11:19:04.215008   40730 command_runner.go:130] >     {
	I0610 11:19:04.215022   40730 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0610 11:19:04.215032   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.215043   40730 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0610 11:19:04.215053   40730 command_runner.go:130] >       ],
	I0610 11:19:04.215060   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.215088   40730 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0610 11:19:04.215104   40730 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0610 11:19:04.215110   40730 command_runner.go:130] >       ],
	I0610 11:19:04.215117   40730 command_runner.go:130] >       "size": "112170310",
	I0610 11:19:04.215124   40730 command_runner.go:130] >       "uid": {
	I0610 11:19:04.215134   40730 command_runner.go:130] >         "value": "0"
	I0610 11:19:04.215140   40730 command_runner.go:130] >       },
	I0610 11:19:04.215150   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.215155   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.215160   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.215165   40730 command_runner.go:130] >     },
	I0610 11:19:04.215169   40730 command_runner.go:130] >     {
	I0610 11:19:04.215177   40730 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0610 11:19:04.215183   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.215191   40730 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0610 11:19:04.215197   40730 command_runner.go:130] >       ],
	I0610 11:19:04.215204   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.215223   40730 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0610 11:19:04.215235   40730 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0610 11:19:04.215241   40730 command_runner.go:130] >       ],
	I0610 11:19:04.215247   40730 command_runner.go:130] >       "size": "85933465",
	I0610 11:19:04.215254   40730 command_runner.go:130] >       "uid": null,
	I0610 11:19:04.215261   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.215268   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.215275   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.215280   40730 command_runner.go:130] >     },
	I0610 11:19:04.215286   40730 command_runner.go:130] >     {
	I0610 11:19:04.215296   40730 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0610 11:19:04.215306   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.215315   40730 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0610 11:19:04.215323   40730 command_runner.go:130] >       ],
	I0610 11:19:04.215331   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.215347   40730 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0610 11:19:04.215363   40730 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0610 11:19:04.215372   40730 command_runner.go:130] >       ],
	I0610 11:19:04.215379   40730 command_runner.go:130] >       "size": "63026504",
	I0610 11:19:04.215388   40730 command_runner.go:130] >       "uid": {
	I0610 11:19:04.215398   40730 command_runner.go:130] >         "value": "0"
	I0610 11:19:04.215406   40730 command_runner.go:130] >       },
	I0610 11:19:04.215413   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.215422   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.215430   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.215438   40730 command_runner.go:130] >     },
	I0610 11:19:04.215445   40730 command_runner.go:130] >     {
	I0610 11:19:04.215459   40730 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0610 11:19:04.215469   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.215480   40730 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0610 11:19:04.215488   40730 command_runner.go:130] >       ],
	I0610 11:19:04.215496   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.215514   40730 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0610 11:19:04.215529   40730 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0610 11:19:04.215538   40730 command_runner.go:130] >       ],
	I0610 11:19:04.215546   40730 command_runner.go:130] >       "size": "750414",
	I0610 11:19:04.215554   40730 command_runner.go:130] >       "uid": {
	I0610 11:19:04.215562   40730 command_runner.go:130] >         "value": "65535"
	I0610 11:19:04.215570   40730 command_runner.go:130] >       },
	I0610 11:19:04.215577   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.215585   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.215595   40730 command_runner.go:130] >       "pinned": true
	I0610 11:19:04.215602   40730 command_runner.go:130] >     }
	I0610 11:19:04.215611   40730 command_runner.go:130] >   ]
	I0610 11:19:04.215619   40730 command_runner.go:130] > }
	I0610 11:19:04.215801   40730 crio.go:514] all images are preloaded for cri-o runtime.
	I0610 11:19:04.215814   40730 crio.go:433] Images already preloaded, skipping extraction
	I0610 11:19:04.215873   40730 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 11:19:04.249585   40730 command_runner.go:130] > {
	I0610 11:19:04.249608   40730 command_runner.go:130] >   "images": [
	I0610 11:19:04.249614   40730 command_runner.go:130] >     {
	I0610 11:19:04.249628   40730 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0610 11:19:04.249635   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.249648   40730 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0610 11:19:04.249653   40730 command_runner.go:130] >       ],
	I0610 11:19:04.249658   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.249669   40730 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0610 11:19:04.249679   40730 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0610 11:19:04.249687   40730 command_runner.go:130] >       ],
	I0610 11:19:04.249697   40730 command_runner.go:130] >       "size": "65291810",
	I0610 11:19:04.249705   40730 command_runner.go:130] >       "uid": null,
	I0610 11:19:04.249713   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.249724   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.249734   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.249741   40730 command_runner.go:130] >     },
	I0610 11:19:04.249747   40730 command_runner.go:130] >     {
	I0610 11:19:04.249758   40730 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0610 11:19:04.249767   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.249776   40730 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0610 11:19:04.249781   40730 command_runner.go:130] >       ],
	I0610 11:19:04.249788   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.249800   40730 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0610 11:19:04.249816   40730 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0610 11:19:04.249823   40730 command_runner.go:130] >       ],
	I0610 11:19:04.249830   40730 command_runner.go:130] >       "size": "65908273",
	I0610 11:19:04.249842   40730 command_runner.go:130] >       "uid": null,
	I0610 11:19:04.249852   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.249861   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.249868   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.249874   40730 command_runner.go:130] >     },
	I0610 11:19:04.249880   40730 command_runner.go:130] >     {
	I0610 11:19:04.249892   40730 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0610 11:19:04.249900   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.249909   40730 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0610 11:19:04.249916   40730 command_runner.go:130] >       ],
	I0610 11:19:04.249924   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.249939   40730 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0610 11:19:04.249955   40730 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0610 11:19:04.249963   40730 command_runner.go:130] >       ],
	I0610 11:19:04.249971   40730 command_runner.go:130] >       "size": "1363676",
	I0610 11:19:04.249981   40730 command_runner.go:130] >       "uid": null,
	I0610 11:19:04.249990   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.250020   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.250029   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.250035   40730 command_runner.go:130] >     },
	I0610 11:19:04.250040   40730 command_runner.go:130] >     {
	I0610 11:19:04.250051   40730 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0610 11:19:04.250061   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.250070   40730 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0610 11:19:04.250081   40730 command_runner.go:130] >       ],
	I0610 11:19:04.250089   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.250106   40730 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0610 11:19:04.250132   40730 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0610 11:19:04.250141   40730 command_runner.go:130] >       ],
	I0610 11:19:04.250149   40730 command_runner.go:130] >       "size": "31470524",
	I0610 11:19:04.250159   40730 command_runner.go:130] >       "uid": null,
	I0610 11:19:04.250169   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.250176   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.250186   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.250192   40730 command_runner.go:130] >     },
	I0610 11:19:04.250199   40730 command_runner.go:130] >     {
	I0610 11:19:04.250212   40730 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0610 11:19:04.250222   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.250232   40730 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0610 11:19:04.250240   40730 command_runner.go:130] >       ],
	I0610 11:19:04.250246   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.250260   40730 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0610 11:19:04.250276   40730 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0610 11:19:04.250284   40730 command_runner.go:130] >       ],
	I0610 11:19:04.250292   40730 command_runner.go:130] >       "size": "61245718",
	I0610 11:19:04.250301   40730 command_runner.go:130] >       "uid": null,
	I0610 11:19:04.250310   40730 command_runner.go:130] >       "username": "nonroot",
	I0610 11:19:04.250321   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.250331   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.250337   40730 command_runner.go:130] >     },
	I0610 11:19:04.250345   40730 command_runner.go:130] >     {
	I0610 11:19:04.250356   40730 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0610 11:19:04.250367   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.250379   40730 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0610 11:19:04.250388   40730 command_runner.go:130] >       ],
	I0610 11:19:04.250396   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.250411   40730 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0610 11:19:04.250426   40730 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0610 11:19:04.250434   40730 command_runner.go:130] >       ],
	I0610 11:19:04.250442   40730 command_runner.go:130] >       "size": "150779692",
	I0610 11:19:04.250453   40730 command_runner.go:130] >       "uid": {
	I0610 11:19:04.250463   40730 command_runner.go:130] >         "value": "0"
	I0610 11:19:04.250475   40730 command_runner.go:130] >       },
	I0610 11:19:04.250486   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.250496   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.250505   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.250513   40730 command_runner.go:130] >     },
	I0610 11:19:04.250519   40730 command_runner.go:130] >     {
	I0610 11:19:04.250530   40730 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0610 11:19:04.250539   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.250548   40730 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0610 11:19:04.250557   40730 command_runner.go:130] >       ],
	I0610 11:19:04.250567   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.250583   40730 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0610 11:19:04.250598   40730 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0610 11:19:04.250607   40730 command_runner.go:130] >       ],
	I0610 11:19:04.250615   40730 command_runner.go:130] >       "size": "117601759",
	I0610 11:19:04.250625   40730 command_runner.go:130] >       "uid": {
	I0610 11:19:04.250634   40730 command_runner.go:130] >         "value": "0"
	I0610 11:19:04.250641   40730 command_runner.go:130] >       },
	I0610 11:19:04.250651   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.250658   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.250668   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.250677   40730 command_runner.go:130] >     },
	I0610 11:19:04.250683   40730 command_runner.go:130] >     {
	I0610 11:19:04.250696   40730 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0610 11:19:04.250706   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.250717   40730 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0610 11:19:04.250727   40730 command_runner.go:130] >       ],
	I0610 11:19:04.250734   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.250755   40730 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0610 11:19:04.250771   40730 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0610 11:19:04.250781   40730 command_runner.go:130] >       ],
	I0610 11:19:04.250790   40730 command_runner.go:130] >       "size": "112170310",
	I0610 11:19:04.250800   40730 command_runner.go:130] >       "uid": {
	I0610 11:19:04.250808   40730 command_runner.go:130] >         "value": "0"
	I0610 11:19:04.250815   40730 command_runner.go:130] >       },
	I0610 11:19:04.250825   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.250833   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.250843   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.250849   40730 command_runner.go:130] >     },
	I0610 11:19:04.250857   40730 command_runner.go:130] >     {
	I0610 11:19:04.250868   40730 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0610 11:19:04.250878   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.250887   40730 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0610 11:19:04.250896   40730 command_runner.go:130] >       ],
	I0610 11:19:04.250903   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.250919   40730 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0610 11:19:04.250938   40730 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0610 11:19:04.250948   40730 command_runner.go:130] >       ],
	I0610 11:19:04.250955   40730 command_runner.go:130] >       "size": "85933465",
	I0610 11:19:04.250964   40730 command_runner.go:130] >       "uid": null,
	I0610 11:19:04.250972   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.250981   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.250988   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.251001   40730 command_runner.go:130] >     },
	I0610 11:19:04.251011   40730 command_runner.go:130] >     {
	I0610 11:19:04.251022   40730 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0610 11:19:04.251032   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.251042   40730 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0610 11:19:04.251051   40730 command_runner.go:130] >       ],
	I0610 11:19:04.251060   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.251075   40730 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0610 11:19:04.251088   40730 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0610 11:19:04.251098   40730 command_runner.go:130] >       ],
	I0610 11:19:04.251106   40730 command_runner.go:130] >       "size": "63026504",
	I0610 11:19:04.251115   40730 command_runner.go:130] >       "uid": {
	I0610 11:19:04.251123   40730 command_runner.go:130] >         "value": "0"
	I0610 11:19:04.251132   40730 command_runner.go:130] >       },
	I0610 11:19:04.251139   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.251148   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.251155   40730 command_runner.go:130] >       "pinned": false
	I0610 11:19:04.251162   40730 command_runner.go:130] >     },
	I0610 11:19:04.251172   40730 command_runner.go:130] >     {
	I0610 11:19:04.251182   40730 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0610 11:19:04.251192   40730 command_runner.go:130] >       "repoTags": [
	I0610 11:19:04.251204   40730 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0610 11:19:04.251212   40730 command_runner.go:130] >       ],
	I0610 11:19:04.251220   40730 command_runner.go:130] >       "repoDigests": [
	I0610 11:19:04.251235   40730 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0610 11:19:04.251249   40730 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0610 11:19:04.251256   40730 command_runner.go:130] >       ],
	I0610 11:19:04.251267   40730 command_runner.go:130] >       "size": "750414",
	I0610 11:19:04.251274   40730 command_runner.go:130] >       "uid": {
	I0610 11:19:04.251282   40730 command_runner.go:130] >         "value": "65535"
	I0610 11:19:04.251291   40730 command_runner.go:130] >       },
	I0610 11:19:04.251298   40730 command_runner.go:130] >       "username": "",
	I0610 11:19:04.251308   40730 command_runner.go:130] >       "spec": null,
	I0610 11:19:04.251318   40730 command_runner.go:130] >       "pinned": true
	I0610 11:19:04.251326   40730 command_runner.go:130] >     }
	I0610 11:19:04.251334   40730 command_runner.go:130] >   ]
	I0610 11:19:04.251341   40730 command_runner.go:130] > }
	I0610 11:19:04.251463   40730 crio.go:514] all images are preloaded for cri-o runtime.
	I0610 11:19:04.251475   40730 cache_images.go:84] Images are preloaded, skipping loading
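	The two crictl listings above are what drive the preload decision: minikube shells out to "sudo crictl images --output json" and inspects the "images" array (id, repoTags, repoDigests, size, pinned) for the expected Kubernetes v1.30.1 images. As a rough illustration of that JSON shape only (not minikube's actual implementation), a minimal Go sketch that decodes the same output, assuming crictl is on the node and can reach the CRI-O socket, could look like this:

	// Illustrative sketch: decode the `sudo crictl images --output json`
	// output captured in the log above. Field names mirror the JSON shown
	// there; this is not minikube's own code.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		// Assumes crictl is installed and configured for the CRI-O socket.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Printf("%v %s bytes (pinned=%v)\n", img.RepoTags, img.Size, img.Pinned)
		}
	}

	Run on the node itself (for example via minikube ssh), this prints one line per image, matching the entries logged above.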
	I0610 11:19:04.251484   40730 kubeadm.go:928] updating node { 192.168.39.100 8443 v1.30.1 crio true true} ...
	I0610 11:19:04.251595   40730 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-862380 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-862380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 11:19:04.251675   40730 ssh_runner.go:195] Run: crio config
	I0610 11:19:04.284718   40730 command_runner.go:130] ! time="2024-06-10 11:19:04.267435288Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0610 11:19:04.290096   40730 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0610 11:19:04.296654   40730 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0610 11:19:04.296681   40730 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0610 11:19:04.296692   40730 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0610 11:19:04.296697   40730 command_runner.go:130] > #
	I0610 11:19:04.296707   40730 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0610 11:19:04.296721   40730 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0610 11:19:04.296728   40730 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0610 11:19:04.296738   40730 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0610 11:19:04.296745   40730 command_runner.go:130] > # reload'.
	I0610 11:19:04.296755   40730 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0610 11:19:04.296771   40730 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0610 11:19:04.296783   40730 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0610 11:19:04.296791   40730 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0610 11:19:04.296800   40730 command_runner.go:130] > [crio]
	I0610 11:19:04.296808   40730 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0610 11:19:04.296818   40730 command_runner.go:130] > # containers images, in this directory.
	I0610 11:19:04.296824   40730 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0610 11:19:04.296836   40730 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0610 11:19:04.296846   40730 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0610 11:19:04.296860   40730 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory rather than under Root.
	I0610 11:19:04.296870   40730 command_runner.go:130] > # imagestore = ""
	I0610 11:19:04.296883   40730 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0610 11:19:04.296896   40730 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0610 11:19:04.296905   40730 command_runner.go:130] > storage_driver = "overlay"
	I0610 11:19:04.296911   40730 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0610 11:19:04.296920   40730 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0610 11:19:04.296937   40730 command_runner.go:130] > storage_option = [
	I0610 11:19:04.296959   40730 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0610 11:19:04.296964   40730 command_runner.go:130] > ]
	I0610 11:19:04.296974   40730 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0610 11:19:04.296984   40730 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0610 11:19:04.296992   40730 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0610 11:19:04.297004   40730 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0610 11:19:04.297012   40730 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0610 11:19:04.297020   40730 command_runner.go:130] > # always happen on a node reboot
	I0610 11:19:04.297024   40730 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0610 11:19:04.297037   40730 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0610 11:19:04.297045   40730 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0610 11:19:04.297050   40730 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0610 11:19:04.297057   40730 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0610 11:19:04.297067   40730 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0610 11:19:04.297077   40730 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0610 11:19:04.297083   40730 command_runner.go:130] > # internal_wipe = true
	I0610 11:19:04.297091   40730 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0610 11:19:04.297099   40730 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0610 11:19:04.297106   40730 command_runner.go:130] > # internal_repair = false
	I0610 11:19:04.297113   40730 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0610 11:19:04.297121   40730 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0610 11:19:04.297129   40730 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0610 11:19:04.297136   40730 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0610 11:19:04.297144   40730 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0610 11:19:04.297150   40730 command_runner.go:130] > [crio.api]
	I0610 11:19:04.297156   40730 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0610 11:19:04.297160   40730 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0610 11:19:04.297168   40730 command_runner.go:130] > # IP address on which the stream server will listen.
	I0610 11:19:04.297172   40730 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0610 11:19:04.297181   40730 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0610 11:19:04.297188   40730 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0610 11:19:04.297192   40730 command_runner.go:130] > # stream_port = "0"
	I0610 11:19:04.297200   40730 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0610 11:19:04.297205   40730 command_runner.go:130] > # stream_enable_tls = false
	I0610 11:19:04.297210   40730 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0610 11:19:04.297217   40730 command_runner.go:130] > # stream_idle_timeout = ""
	I0610 11:19:04.297235   40730 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0610 11:19:04.297248   40730 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0610 11:19:04.297258   40730 command_runner.go:130] > # minutes.
	I0610 11:19:04.297267   40730 command_runner.go:130] > # stream_tls_cert = ""
	I0610 11:19:04.297280   40730 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0610 11:19:04.297294   40730 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0610 11:19:04.297303   40730 command_runner.go:130] > # stream_tls_key = ""
	I0610 11:19:04.297313   40730 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0610 11:19:04.297321   40730 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0610 11:19:04.297338   40730 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0610 11:19:04.297345   40730 command_runner.go:130] > # stream_tls_ca = ""
	I0610 11:19:04.297352   40730 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0610 11:19:04.297359   40730 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0610 11:19:04.297366   40730 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0610 11:19:04.297373   40730 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0610 11:19:04.297379   40730 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0610 11:19:04.297387   40730 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0610 11:19:04.297394   40730 command_runner.go:130] > [crio.runtime]
	I0610 11:19:04.297400   40730 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0610 11:19:04.297408   40730 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0610 11:19:04.297415   40730 command_runner.go:130] > # "nofile=1024:2048"
	I0610 11:19:04.297421   40730 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0610 11:19:04.297427   40730 command_runner.go:130] > # default_ulimits = [
	I0610 11:19:04.297431   40730 command_runner.go:130] > # ]
	I0610 11:19:04.297436   40730 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0610 11:19:04.297443   40730 command_runner.go:130] > # no_pivot = false
	I0610 11:19:04.297448   40730 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0610 11:19:04.297456   40730 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0610 11:19:04.297463   40730 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0610 11:19:04.297470   40730 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0610 11:19:04.297477   40730 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0610 11:19:04.297483   40730 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0610 11:19:04.297490   40730 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0610 11:19:04.297494   40730 command_runner.go:130] > # Cgroup setting for conmon
	I0610 11:19:04.297501   40730 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0610 11:19:04.297507   40730 command_runner.go:130] > conmon_cgroup = "pod"
	I0610 11:19:04.297514   40730 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0610 11:19:04.297523   40730 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0610 11:19:04.297540   40730 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0610 11:19:04.297547   40730 command_runner.go:130] > conmon_env = [
	I0610 11:19:04.297553   40730 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0610 11:19:04.297558   40730 command_runner.go:130] > ]
	I0610 11:19:04.297563   40730 command_runner.go:130] > # Additional environment variables to set for all the
	I0610 11:19:04.297570   40730 command_runner.go:130] > # containers. These are overridden if set in the
	I0610 11:19:04.297576   40730 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0610 11:19:04.297583   40730 command_runner.go:130] > # default_env = [
	I0610 11:19:04.297590   40730 command_runner.go:130] > # ]
	I0610 11:19:04.297595   40730 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0610 11:19:04.297602   40730 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0610 11:19:04.297608   40730 command_runner.go:130] > # selinux = false
	I0610 11:19:04.297615   40730 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0610 11:19:04.297624   40730 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0610 11:19:04.297632   40730 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0610 11:19:04.297639   40730 command_runner.go:130] > # seccomp_profile = ""
	I0610 11:19:04.297645   40730 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0610 11:19:04.297654   40730 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0610 11:19:04.297662   40730 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0610 11:19:04.297669   40730 command_runner.go:130] > # which might increase security.
	I0610 11:19:04.297674   40730 command_runner.go:130] > # This option is currently deprecated,
	I0610 11:19:04.297682   40730 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0610 11:19:04.297689   40730 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0610 11:19:04.297695   40730 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0610 11:19:04.297703   40730 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0610 11:19:04.297711   40730 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0610 11:19:04.297719   40730 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0610 11:19:04.297726   40730 command_runner.go:130] > # This option supports live configuration reload.
	I0610 11:19:04.297730   40730 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0610 11:19:04.297738   40730 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0610 11:19:04.297745   40730 command_runner.go:130] > # the cgroup blockio controller.
	I0610 11:19:04.297749   40730 command_runner.go:130] > # blockio_config_file = ""
	I0610 11:19:04.297757   40730 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0610 11:19:04.297761   40730 command_runner.go:130] > # blockio parameters.
	I0610 11:19:04.297767   40730 command_runner.go:130] > # blockio_reload = false
	I0610 11:19:04.297774   40730 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0610 11:19:04.297780   40730 command_runner.go:130] > # irqbalance daemon.
	I0610 11:19:04.297785   40730 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0610 11:19:04.297795   40730 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0610 11:19:04.297805   40730 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0610 11:19:04.297813   40730 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0610 11:19:04.297821   40730 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0610 11:19:04.297829   40730 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0610 11:19:04.297834   40730 command_runner.go:130] > # This option supports live configuration reload.
	I0610 11:19:04.297841   40730 command_runner.go:130] > # rdt_config_file = ""
	I0610 11:19:04.297846   40730 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0610 11:19:04.297853   40730 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0610 11:19:04.297868   40730 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0610 11:19:04.297875   40730 command_runner.go:130] > # separate_pull_cgroup = ""
	I0610 11:19:04.297881   40730 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0610 11:19:04.297889   40730 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0610 11:19:04.297895   40730 command_runner.go:130] > # will be added.
	I0610 11:19:04.297899   40730 command_runner.go:130] > # default_capabilities = [
	I0610 11:19:04.297906   40730 command_runner.go:130] > # 	"CHOWN",
	I0610 11:19:04.297910   40730 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0610 11:19:04.297916   40730 command_runner.go:130] > # 	"FSETID",
	I0610 11:19:04.297920   40730 command_runner.go:130] > # 	"FOWNER",
	I0610 11:19:04.297924   40730 command_runner.go:130] > # 	"SETGID",
	I0610 11:19:04.297930   40730 command_runner.go:130] > # 	"SETUID",
	I0610 11:19:04.297934   40730 command_runner.go:130] > # 	"SETPCAP",
	I0610 11:19:04.297940   40730 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0610 11:19:04.297943   40730 command_runner.go:130] > # 	"KILL",
	I0610 11:19:04.297949   40730 command_runner.go:130] > # ]
	I0610 11:19:04.297956   40730 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0610 11:19:04.297965   40730 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0610 11:19:04.297971   40730 command_runner.go:130] > # add_inheritable_capabilities = false
	I0610 11:19:04.297979   40730 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0610 11:19:04.297989   40730 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0610 11:19:04.297995   40730 command_runner.go:130] > default_sysctls = [
	I0610 11:19:04.298000   40730 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0610 11:19:04.298003   40730 command_runner.go:130] > ]
	I0610 11:19:04.298010   40730 command_runner.go:130] > # List of devices on the host that a
	I0610 11:19:04.298016   40730 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0610 11:19:04.298023   40730 command_runner.go:130] > # allowed_devices = [
	I0610 11:19:04.298026   40730 command_runner.go:130] > # 	"/dev/fuse",
	I0610 11:19:04.298032   40730 command_runner.go:130] > # ]
	I0610 11:19:04.298036   40730 command_runner.go:130] > # List of additional devices, specified as
	I0610 11:19:04.298046   40730 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0610 11:19:04.298053   40730 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0610 11:19:04.298063   40730 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0610 11:19:04.298069   40730 command_runner.go:130] > # additional_devices = [
	I0610 11:19:04.298072   40730 command_runner.go:130] > # ]
	I0610 11:19:04.298080   40730 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0610 11:19:04.298084   40730 command_runner.go:130] > # cdi_spec_dirs = [
	I0610 11:19:04.298090   40730 command_runner.go:130] > # 	"/etc/cdi",
	I0610 11:19:04.298094   40730 command_runner.go:130] > # 	"/var/run/cdi",
	I0610 11:19:04.298099   40730 command_runner.go:130] > # ]
	I0610 11:19:04.298105   40730 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0610 11:19:04.298114   40730 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0610 11:19:04.298121   40730 command_runner.go:130] > # Defaults to false.
	I0610 11:19:04.298126   40730 command_runner.go:130] > # device_ownership_from_security_context = false
	I0610 11:19:04.298134   40730 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0610 11:19:04.298142   40730 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0610 11:19:04.298149   40730 command_runner.go:130] > # hooks_dir = [
	I0610 11:19:04.298153   40730 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0610 11:19:04.298158   40730 command_runner.go:130] > # ]
	I0610 11:19:04.298164   40730 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0610 11:19:04.298172   40730 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0610 11:19:04.298177   40730 command_runner.go:130] > # its default mounts from the following two files:
	I0610 11:19:04.298182   40730 command_runner.go:130] > #
	I0610 11:19:04.298188   40730 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0610 11:19:04.298197   40730 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0610 11:19:04.298204   40730 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0610 11:19:04.298207   40730 command_runner.go:130] > #
	I0610 11:19:04.298213   40730 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0610 11:19:04.298221   40730 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0610 11:19:04.298238   40730 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0610 11:19:04.298249   40730 command_runner.go:130] > #      only add mounts it finds in this file.
	I0610 11:19:04.298258   40730 command_runner.go:130] > #
	I0610 11:19:04.298264   40730 command_runner.go:130] > # default_mounts_file = ""
	I0610 11:19:04.298276   40730 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0610 11:19:04.298289   40730 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0610 11:19:04.298299   40730 command_runner.go:130] > pids_limit = 1024
	I0610 11:19:04.298309   40730 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0610 11:19:04.298317   40730 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0610 11:19:04.298326   40730 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0610 11:19:04.298336   40730 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0610 11:19:04.298342   40730 command_runner.go:130] > # log_size_max = -1
	I0610 11:19:04.298349   40730 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0610 11:19:04.298358   40730 command_runner.go:130] > # log_to_journald = false
	I0610 11:19:04.298366   40730 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0610 11:19:04.298374   40730 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0610 11:19:04.298382   40730 command_runner.go:130] > # Path to directory for container attach sockets.
	I0610 11:19:04.298387   40730 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0610 11:19:04.298395   40730 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0610 11:19:04.298400   40730 command_runner.go:130] > # bind_mount_prefix = ""
	I0610 11:19:04.298408   40730 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0610 11:19:04.298414   40730 command_runner.go:130] > # read_only = false
	I0610 11:19:04.298420   40730 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0610 11:19:04.298428   40730 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0610 11:19:04.298432   40730 command_runner.go:130] > # live configuration reload.
	I0610 11:19:04.298439   40730 command_runner.go:130] > # log_level = "info"
	I0610 11:19:04.298444   40730 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0610 11:19:04.298451   40730 command_runner.go:130] > # This option supports live configuration reload.
	I0610 11:19:04.298455   40730 command_runner.go:130] > # log_filter = ""
	I0610 11:19:04.298464   40730 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0610 11:19:04.298474   40730 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0610 11:19:04.298480   40730 command_runner.go:130] > # separated by comma.
	I0610 11:19:04.298488   40730 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0610 11:19:04.298494   40730 command_runner.go:130] > # uid_mappings = ""
	I0610 11:19:04.298502   40730 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0610 11:19:04.298510   40730 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0610 11:19:04.298516   40730 command_runner.go:130] > # separated by comma.
	I0610 11:19:04.298524   40730 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0610 11:19:04.298530   40730 command_runner.go:130] > # gid_mappings = ""
	I0610 11:19:04.298536   40730 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0610 11:19:04.298544   40730 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0610 11:19:04.298552   40730 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0610 11:19:04.298563   40730 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0610 11:19:04.298569   40730 command_runner.go:130] > # minimum_mappable_uid = -1
	I0610 11:19:04.298576   40730 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0610 11:19:04.298584   40730 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0610 11:19:04.298592   40730 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0610 11:19:04.298599   40730 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0610 11:19:04.298608   40730 command_runner.go:130] > # minimum_mappable_gid = -1
	I0610 11:19:04.298616   40730 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0610 11:19:04.298624   40730 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0610 11:19:04.298631   40730 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0610 11:19:04.298637   40730 command_runner.go:130] > # ctr_stop_timeout = 30
	I0610 11:19:04.298643   40730 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0610 11:19:04.298651   40730 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0610 11:19:04.298659   40730 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0610 11:19:04.298664   40730 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0610 11:19:04.298671   40730 command_runner.go:130] > drop_infra_ctr = false
	I0610 11:19:04.298677   40730 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0610 11:19:04.298684   40730 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0610 11:19:04.298694   40730 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0610 11:19:04.298700   40730 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0610 11:19:04.298707   40730 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0610 11:19:04.298715   40730 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0610 11:19:04.298721   40730 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0610 11:19:04.298728   40730 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0610 11:19:04.298732   40730 command_runner.go:130] > # shared_cpuset = ""
	I0610 11:19:04.298741   40730 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0610 11:19:04.298747   40730 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0610 11:19:04.298751   40730 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0610 11:19:04.298760   40730 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0610 11:19:04.298767   40730 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0610 11:19:04.298772   40730 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0610 11:19:04.298780   40730 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0610 11:19:04.298785   40730 command_runner.go:130] > # enable_criu_support = false
	I0610 11:19:04.298790   40730 command_runner.go:130] > # Enable/disable the generation of the container,
	I0610 11:19:04.298798   40730 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0610 11:19:04.298804   40730 command_runner.go:130] > # enable_pod_events = false
	I0610 11:19:04.298811   40730 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0610 11:19:04.298826   40730 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0610 11:19:04.298830   40730 command_runner.go:130] > # default_runtime = "runc"
	I0610 11:19:04.298838   40730 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0610 11:19:04.298845   40730 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0610 11:19:04.298856   40730 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0610 11:19:04.298866   40730 command_runner.go:130] > # creation as a file is not desired either.
	I0610 11:19:04.298876   40730 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0610 11:19:04.298883   40730 command_runner.go:130] > # the hostname is being managed dynamically.
	I0610 11:19:04.298887   40730 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0610 11:19:04.298893   40730 command_runner.go:130] > # ]
	I0610 11:19:04.298899   40730 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0610 11:19:04.298917   40730 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0610 11:19:04.298925   40730 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0610 11:19:04.298932   40730 command_runner.go:130] > # Each entry in the table should follow the format:
	I0610 11:19:04.298936   40730 command_runner.go:130] > #
	I0610 11:19:04.298944   40730 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0610 11:19:04.298948   40730 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0610 11:19:04.298969   40730 command_runner.go:130] > # runtime_type = "oci"
	I0610 11:19:04.298976   40730 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0610 11:19:04.298985   40730 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0610 11:19:04.298989   40730 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0610 11:19:04.298994   40730 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0610 11:19:04.298998   40730 command_runner.go:130] > # monitor_env = []
	I0610 11:19:04.299003   40730 command_runner.go:130] > # privileged_without_host_devices = false
	I0610 11:19:04.299009   40730 command_runner.go:130] > # allowed_annotations = []
	I0610 11:19:04.299015   40730 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0610 11:19:04.299020   40730 command_runner.go:130] > # Where:
	I0610 11:19:04.299026   40730 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0610 11:19:04.299034   40730 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0610 11:19:04.299041   40730 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0610 11:19:04.299049   40730 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0610 11:19:04.299056   40730 command_runner.go:130] > #   in $PATH.
	I0610 11:19:04.299062   40730 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0610 11:19:04.299069   40730 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0610 11:19:04.299074   40730 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0610 11:19:04.299080   40730 command_runner.go:130] > #   state.
	I0610 11:19:04.299086   40730 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0610 11:19:04.299094   40730 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0610 11:19:04.299102   40730 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0610 11:19:04.299110   40730 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0610 11:19:04.299115   40730 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0610 11:19:04.299124   40730 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0610 11:19:04.299133   40730 command_runner.go:130] > #   The currently recognized values are:
	I0610 11:19:04.299141   40730 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0610 11:19:04.299151   40730 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0610 11:19:04.299159   40730 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0610 11:19:04.299165   40730 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0610 11:19:04.299175   40730 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0610 11:19:04.299184   40730 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0610 11:19:04.299193   40730 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0610 11:19:04.299201   40730 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0610 11:19:04.299209   40730 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0610 11:19:04.299218   40730 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0610 11:19:04.299229   40730 command_runner.go:130] > #   deprecated option "conmon".
	I0610 11:19:04.299243   40730 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0610 11:19:04.299254   40730 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0610 11:19:04.299268   40730 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0610 11:19:04.299279   40730 command_runner.go:130] > #   should be moved to the container's cgroup
	I0610 11:19:04.299292   40730 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0610 11:19:04.299302   40730 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0610 11:19:04.299311   40730 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0610 11:19:04.299318   40730 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0610 11:19:04.299322   40730 command_runner.go:130] > #
	I0610 11:19:04.299327   40730 command_runner.go:130] > # Using the seccomp notifier feature:
	I0610 11:19:04.299331   40730 command_runner.go:130] > #
	I0610 11:19:04.299337   40730 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0610 11:19:04.299346   40730 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0610 11:19:04.299351   40730 command_runner.go:130] > #
	I0610 11:19:04.299357   40730 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0610 11:19:04.299365   40730 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0610 11:19:04.299370   40730 command_runner.go:130] > #
	I0610 11:19:04.299376   40730 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0610 11:19:04.299380   40730 command_runner.go:130] > # feature.
	I0610 11:19:04.299384   40730 command_runner.go:130] > #
	I0610 11:19:04.299392   40730 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0610 11:19:04.299401   40730 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0610 11:19:04.299410   40730 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0610 11:19:04.299421   40730 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0610 11:19:04.299429   40730 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0610 11:19:04.299435   40730 command_runner.go:130] > #
	I0610 11:19:04.299441   40730 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0610 11:19:04.299450   40730 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0610 11:19:04.299456   40730 command_runner.go:130] > #
	I0610 11:19:04.299463   40730 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0610 11:19:04.299471   40730 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0610 11:19:04.299474   40730 command_runner.go:130] > #
	I0610 11:19:04.299482   40730 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0610 11:19:04.299491   40730 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0610 11:19:04.299497   40730 command_runner.go:130] > # limitation.
	I0610 11:19:04.299504   40730 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0610 11:19:04.299510   40730 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0610 11:19:04.299514   40730 command_runner.go:130] > runtime_type = "oci"
	I0610 11:19:04.299521   40730 command_runner.go:130] > runtime_root = "/run/runc"
	I0610 11:19:04.299526   40730 command_runner.go:130] > runtime_config_path = ""
	I0610 11:19:04.299533   40730 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0610 11:19:04.299537   40730 command_runner.go:130] > monitor_cgroup = "pod"
	I0610 11:19:04.299543   40730 command_runner.go:130] > monitor_exec_cgroup = ""
	I0610 11:19:04.299547   40730 command_runner.go:130] > monitor_env = [
	I0610 11:19:04.299554   40730 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0610 11:19:04.299561   40730 command_runner.go:130] > ]
	I0610 11:19:04.299565   40730 command_runner.go:130] > privileged_without_host_devices = false
	I0610 11:19:04.299574   40730 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0610 11:19:04.299581   40730 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0610 11:19:04.299588   40730 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0610 11:19:04.299597   40730 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0610 11:19:04.299607   40730 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0610 11:19:04.299615   40730 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0610 11:19:04.299626   40730 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0610 11:19:04.299636   40730 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0610 11:19:04.299641   40730 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0610 11:19:04.299647   40730 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0610 11:19:04.299650   40730 command_runner.go:130] > # Example:
	I0610 11:19:04.299655   40730 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0610 11:19:04.299659   40730 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0610 11:19:04.299666   40730 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0610 11:19:04.299671   40730 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0610 11:19:04.299674   40730 command_runner.go:130] > # cpuset = 0
	I0610 11:19:04.299678   40730 command_runner.go:130] > # cpushares = "0-1"
	I0610 11:19:04.299681   40730 command_runner.go:130] > # Where:
	I0610 11:19:04.299686   40730 command_runner.go:130] > # The workload name is workload-type.
	I0610 11:19:04.299692   40730 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0610 11:19:04.299697   40730 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0610 11:19:04.299702   40730 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0610 11:19:04.299709   40730 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0610 11:19:04.299715   40730 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0610 11:19:04.299719   40730 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0610 11:19:04.299725   40730 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0610 11:19:04.299729   40730 command_runner.go:130] > # Default value is set to true
	I0610 11:19:04.299733   40730 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0610 11:19:04.299738   40730 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0610 11:19:04.299742   40730 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0610 11:19:04.299746   40730 command_runner.go:130] > # Default value is set to 'false'
	I0610 11:19:04.299750   40730 command_runner.go:130] > # disable_hostport_mapping = false
	I0610 11:19:04.299756   40730 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0610 11:19:04.299758   40730 command_runner.go:130] > #
	I0610 11:19:04.299763   40730 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0610 11:19:04.299769   40730 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0610 11:19:04.299775   40730 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0610 11:19:04.299781   40730 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0610 11:19:04.299786   40730 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0610 11:19:04.299790   40730 command_runner.go:130] > [crio.image]
	I0610 11:19:04.299795   40730 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0610 11:19:04.299799   40730 command_runner.go:130] > # default_transport = "docker://"
	I0610 11:19:04.299808   40730 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0610 11:19:04.299814   40730 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0610 11:19:04.299820   40730 command_runner.go:130] > # global_auth_file = ""
	I0610 11:19:04.299825   40730 command_runner.go:130] > # The image used to instantiate infra containers.
	I0610 11:19:04.299833   40730 command_runner.go:130] > # This option supports live configuration reload.
	I0610 11:19:04.299837   40730 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0610 11:19:04.299846   40730 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0610 11:19:04.299854   40730 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0610 11:19:04.299860   40730 command_runner.go:130] > # This option supports live configuration reload.
	I0610 11:19:04.299869   40730 command_runner.go:130] > # pause_image_auth_file = ""
	I0610 11:19:04.299877   40730 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0610 11:19:04.299885   40730 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0610 11:19:04.299894   40730 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0610 11:19:04.299900   40730 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0610 11:19:04.299907   40730 command_runner.go:130] > # pause_command = "/pause"
	I0610 11:19:04.299913   40730 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0610 11:19:04.299921   40730 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0610 11:19:04.299929   40730 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0610 11:19:04.299938   40730 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0610 11:19:04.299946   40730 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0610 11:19:04.299952   40730 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0610 11:19:04.299958   40730 command_runner.go:130] > # pinned_images = [
	I0610 11:19:04.299961   40730 command_runner.go:130] > # ]
	I0610 11:19:04.299969   40730 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0610 11:19:04.299980   40730 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0610 11:19:04.299988   40730 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0610 11:19:04.299997   40730 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0610 11:19:04.300002   40730 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0610 11:19:04.300006   40730 command_runner.go:130] > # signature_policy = ""
	I0610 11:19:04.300012   40730 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0610 11:19:04.300020   40730 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0610 11:19:04.300027   40730 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0610 11:19:04.300035   40730 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0610 11:19:04.300043   40730 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0610 11:19:04.300049   40730 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0610 11:19:04.300055   40730 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0610 11:19:04.300064   40730 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0610 11:19:04.300070   40730 command_runner.go:130] > # changing them here.
	I0610 11:19:04.300074   40730 command_runner.go:130] > # insecure_registries = [
	I0610 11:19:04.300078   40730 command_runner.go:130] > # ]
	I0610 11:19:04.300084   40730 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0610 11:19:04.300091   40730 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0610 11:19:04.300095   40730 command_runner.go:130] > # image_volumes = "mkdir"
	I0610 11:19:04.300103   40730 command_runner.go:130] > # Temporary directory to use for storing big files
	I0610 11:19:04.300110   40730 command_runner.go:130] > # big_files_temporary_dir = ""
	I0610 11:19:04.300119   40730 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0610 11:19:04.300125   40730 command_runner.go:130] > # CNI plugins.
	I0610 11:19:04.300128   40730 command_runner.go:130] > [crio.network]
	I0610 11:19:04.300137   40730 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0610 11:19:04.300145   40730 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0610 11:19:04.300149   40730 command_runner.go:130] > # cni_default_network = ""
	I0610 11:19:04.300157   40730 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0610 11:19:04.300162   40730 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0610 11:19:04.300169   40730 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0610 11:19:04.300175   40730 command_runner.go:130] > # plugin_dirs = [
	I0610 11:19:04.300178   40730 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0610 11:19:04.300182   40730 command_runner.go:130] > # ]
	I0610 11:19:04.300188   40730 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0610 11:19:04.300194   40730 command_runner.go:130] > [crio.metrics]
	I0610 11:19:04.300199   40730 command_runner.go:130] > # Globally enable or disable metrics support.
	I0610 11:19:04.300205   40730 command_runner.go:130] > enable_metrics = true
	I0610 11:19:04.300210   40730 command_runner.go:130] > # Specify enabled metrics collectors.
	I0610 11:19:04.300217   40730 command_runner.go:130] > # Per default all metrics are enabled.
	I0610 11:19:04.300223   40730 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0610 11:19:04.300237   40730 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0610 11:19:04.300250   40730 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0610 11:19:04.300259   40730 command_runner.go:130] > # metrics_collectors = [
	I0610 11:19:04.300268   40730 command_runner.go:130] > # 	"operations",
	I0610 11:19:04.300279   40730 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0610 11:19:04.300290   40730 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0610 11:19:04.300300   40730 command_runner.go:130] > # 	"operations_errors",
	I0610 11:19:04.300309   40730 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0610 11:19:04.300318   40730 command_runner.go:130] > # 	"image_pulls_by_name",
	I0610 11:19:04.300327   40730 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0610 11:19:04.300337   40730 command_runner.go:130] > # 	"image_pulls_failures",
	I0610 11:19:04.300347   40730 command_runner.go:130] > # 	"image_pulls_successes",
	I0610 11:19:04.300357   40730 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0610 11:19:04.300365   40730 command_runner.go:130] > # 	"image_layer_reuse",
	I0610 11:19:04.300372   40730 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0610 11:19:04.300377   40730 command_runner.go:130] > # 	"containers_oom_total",
	I0610 11:19:04.300383   40730 command_runner.go:130] > # 	"containers_oom",
	I0610 11:19:04.300387   40730 command_runner.go:130] > # 	"processes_defunct",
	I0610 11:19:04.300393   40730 command_runner.go:130] > # 	"operations_total",
	I0610 11:19:04.300398   40730 command_runner.go:130] > # 	"operations_latency_seconds",
	I0610 11:19:04.300404   40730 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0610 11:19:04.300411   40730 command_runner.go:130] > # 	"operations_errors_total",
	I0610 11:19:04.300415   40730 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0610 11:19:04.300422   40730 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0610 11:19:04.300426   40730 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0610 11:19:04.300431   40730 command_runner.go:130] > # 	"image_pulls_success_total",
	I0610 11:19:04.300441   40730 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0610 11:19:04.300447   40730 command_runner.go:130] > # 	"containers_oom_count_total",
	I0610 11:19:04.300453   40730 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0610 11:19:04.300459   40730 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0610 11:19:04.300463   40730 command_runner.go:130] > # ]
	I0610 11:19:04.300468   40730 command_runner.go:130] > # The port on which the metrics server will listen.
	I0610 11:19:04.300474   40730 command_runner.go:130] > # metrics_port = 9090
	I0610 11:19:04.300480   40730 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0610 11:19:04.300487   40730 command_runner.go:130] > # metrics_socket = ""
	I0610 11:19:04.300492   40730 command_runner.go:130] > # The certificate for the secure metrics server.
	I0610 11:19:04.300500   40730 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0610 11:19:04.300509   40730 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0610 11:19:04.300514   40730 command_runner.go:130] > # certificate on any modification event.
	I0610 11:19:04.300519   40730 command_runner.go:130] > # metrics_cert = ""
	I0610 11:19:04.300524   40730 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0610 11:19:04.300531   40730 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0610 11:19:04.300535   40730 command_runner.go:130] > # metrics_key = ""
	I0610 11:19:04.300604   40730 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0610 11:19:04.300608   40730 command_runner.go:130] > [crio.tracing]
	I0610 11:19:04.300613   40730 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0610 11:19:04.300617   40730 command_runner.go:130] > # enable_tracing = false
	I0610 11:19:04.300622   40730 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0610 11:19:04.300629   40730 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0610 11:19:04.300637   40730 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0610 11:19:04.300644   40730 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0610 11:19:04.300648   40730 command_runner.go:130] > # CRI-O NRI configuration.
	I0610 11:19:04.300654   40730 command_runner.go:130] > [crio.nri]
	I0610 11:19:04.300658   40730 command_runner.go:130] > # Globally enable or disable NRI.
	I0610 11:19:04.300664   40730 command_runner.go:130] > # enable_nri = false
	I0610 11:19:04.300668   40730 command_runner.go:130] > # NRI socket to listen on.
	I0610 11:19:04.300677   40730 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0610 11:19:04.300684   40730 command_runner.go:130] > # NRI plugin directory to use.
	I0610 11:19:04.300689   40730 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0610 11:19:04.300697   40730 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0610 11:19:04.300705   40730 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0610 11:19:04.300710   40730 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0610 11:19:04.300717   40730 command_runner.go:130] > # nri_disable_connections = false
	I0610 11:19:04.300722   40730 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0610 11:19:04.300729   40730 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0610 11:19:04.300734   40730 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0610 11:19:04.300741   40730 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0610 11:19:04.300747   40730 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0610 11:19:04.300753   40730 command_runner.go:130] > [crio.stats]
	I0610 11:19:04.300761   40730 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0610 11:19:04.300769   40730 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0610 11:19:04.300774   40730 command_runner.go:130] > # stats_collection_period = 0
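The dump above is the CRI-O configuration (TOML) as read back from the node. As a minimal sketch for reproducing such a dump by hand, assuming the standard crio CLI and systemd unit are present on the guest and the default config paths apply (the log does not show which command produced this particular dump):

    # Render a crio.conf based on CRI-O's current settings (defaults plus
    # /etc/crio/crio.conf and any drop-ins under /etc/crio/crio.conf.d/).
    sudo crio config | less

    # After editing the configuration, restart the service so changes take effect.
    sudo systemctl restart crio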
	I0610 11:19:04.300874   40730 cni.go:84] Creating CNI manager for ""
	I0610 11:19:04.300882   40730 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0610 11:19:04.300890   40730 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 11:19:04.300910   40730 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-862380 NodeName:multinode-862380 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 11:19:04.301050   40730 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-862380"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
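The YAML above is the kubeadm configuration minikube rendered for this node; it is written to /var/tmp/minikube/kubeadm.yaml.new in the scp step below. A minimal sketch for sanity-checking such a rendered file on the node, assuming the "kubeadm config validate" subcommand is available in the v1.30.1 binary shown below:

    # Validate the rendered configuration documents without applying anything
    # to the cluster; a non-zero exit reports the offending field.
    sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new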
	
	I0610 11:19:04.301114   40730 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 11:19:04.311624   40730 command_runner.go:130] > kubeadm
	I0610 11:19:04.311646   40730 command_runner.go:130] > kubectl
	I0610 11:19:04.311650   40730 command_runner.go:130] > kubelet
	I0610 11:19:04.311670   40730 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 11:19:04.311716   40730 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 11:19:04.320820   40730 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0610 11:19:04.336935   40730 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 11:19:04.352581   40730 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0610 11:19:04.368500   40730 ssh_runner.go:195] Run: grep 192.168.39.100	control-plane.minikube.internal$ /etc/hosts
	I0610 11:19:04.372243   40730 command_runner.go:130] > 192.168.39.100	control-plane.minikube.internal
	I0610 11:19:04.372319   40730 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:19:04.514176   40730 ssh_runner.go:195] Run: sudo systemctl start kubelet
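The steps above copy the kubelet unit file and its 10-kubeadm.conf drop-in onto the node, reload systemd, and start the kubelet. A minimal sketch for confirming what was installed, assuming standard systemd tooling on the guest:

    # Show the kubelet unit together with the 10-kubeadm.conf drop-in just written.
    systemctl cat kubelet
    # Confirm the service actually came up after 'systemctl start kubelet'.
    systemctl status kubelet --no-pager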
	I0610 11:19:04.528402   40730 certs.go:68] Setting up /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/multinode-862380 for IP: 192.168.39.100
	I0610 11:19:04.528426   40730 certs.go:194] generating shared ca certs ...
	I0610 11:19:04.528446   40730 certs.go:226] acquiring lock for ca certs: {Name:mke8b68fecbd8b649419d142cdde25446085a9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:19:04.528641   40730 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key
	I0610 11:19:04.528684   40730 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key
	I0610 11:19:04.528694   40730 certs.go:256] generating profile certs ...
	I0610 11:19:04.528831   40730 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/multinode-862380/client.key
	I0610 11:19:04.528912   40730 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/multinode-862380/apiserver.key.a2475a71
	I0610 11:19:04.529014   40730 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/multinode-862380/proxy-client.key
	I0610 11:19:04.529029   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 11:19:04.529052   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 11:19:04.529071   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 11:19:04.529088   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 11:19:04.529104   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/multinode-862380/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 11:19:04.529122   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/multinode-862380/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 11:19:04.529138   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/multinode-862380/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 11:19:04.529156   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/multinode-862380/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 11:19:04.529232   40730 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem (1338 bytes)
	W0610 11:19:04.529273   40730 certs.go:480] ignoring /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758_empty.pem, impossibly tiny 0 bytes
	I0610 11:19:04.529286   40730 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 11:19:04.529315   40730 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem (1082 bytes)
	I0610 11:19:04.529346   40730 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem (1123 bytes)
	I0610 11:19:04.529380   40730 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem (1675 bytes)
	I0610 11:19:04.529430   40730 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem (1708 bytes)
	I0610 11:19:04.529467   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:19:04.529487   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem -> /usr/share/ca-certificates/10758.pem
	I0610 11:19:04.529504   40730 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> /usr/share/ca-certificates/107582.pem
	I0610 11:19:04.530151   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 11:19:04.554473   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 11:19:04.577097   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 11:19:04.599483   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 11:19:04.622193   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/multinode-862380/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0610 11:19:04.644912   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/multinode-862380/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 11:19:04.668405   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/multinode-862380/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 11:19:04.691454   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/multinode-862380/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 11:19:04.714875   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 11:19:04.738286   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem --> /usr/share/ca-certificates/10758.pem (1338 bytes)
	I0610 11:19:04.762284   40730 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /usr/share/ca-certificates/107582.pem (1708 bytes)
	I0610 11:19:04.785548   40730 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 11:19:04.801895   40730 ssh_runner.go:195] Run: openssl version
	I0610 11:19:04.807696   40730 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0610 11:19:04.807844   40730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 11:19:04.819442   40730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:19:04.823982   40730 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jun 10 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:19:04.824009   40730 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:19:04.824055   40730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:19:04.829638   40730 command_runner.go:130] > b5213941
	I0610 11:19:04.829736   40730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 11:19:04.838894   40730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10758.pem && ln -fs /usr/share/ca-certificates/10758.pem /etc/ssl/certs/10758.pem"
	I0610 11:19:04.849775   40730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10758.pem
	I0610 11:19:04.854638   40730 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jun 10 10:34 /usr/share/ca-certificates/10758.pem
	I0610 11:19:04.854691   40730 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:34 /usr/share/ca-certificates/10758.pem
	I0610 11:19:04.854739   40730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10758.pem
	I0610 11:19:04.860242   40730 command_runner.go:130] > 51391683
	I0610 11:19:04.860303   40730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10758.pem /etc/ssl/certs/51391683.0"
	I0610 11:19:04.869487   40730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107582.pem && ln -fs /usr/share/ca-certificates/107582.pem /etc/ssl/certs/107582.pem"
	I0610 11:19:04.879854   40730 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107582.pem
	I0610 11:19:04.884054   40730 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jun 10 10:34 /usr/share/ca-certificates/107582.pem
	I0610 11:19:04.884087   40730 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:34 /usr/share/ca-certificates/107582.pem
	I0610 11:19:04.884135   40730 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107582.pem
	I0610 11:19:04.889287   40730 command_runner.go:130] > 3ec20f2e
	I0610 11:19:04.889411   40730 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107582.pem /etc/ssl/certs/3ec20f2e.0"
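The three certificate installs above follow the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and link /etc/ssl/certs/<hash>.0 to it so OpenSSL can find the CA by hash. A minimal sketch using the minikubeCA values from this log (hash b5213941):

    # Same hash computation the log runs; the output names the trust-store symlink.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # The symlink created by the 'ln -fs' step:
    ls -l /etc/ssl/certs/b5213941.0
    # With the hash link in place, verification against the system store succeeds.
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem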
	I0610 11:19:04.898381   40730 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 11:19:04.902534   40730 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 11:19:04.902553   40730 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0610 11:19:04.902559   40730 command_runner.go:130] > Device: 253,1	Inode: 7339542     Links: 1
	I0610 11:19:04.902565   40730 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0610 11:19:04.902571   40730 command_runner.go:130] > Access: 2024-06-10 11:12:55.236834982 +0000
	I0610 11:19:04.902577   40730 command_runner.go:130] > Modify: 2024-06-10 11:12:55.236834982 +0000
	I0610 11:19:04.902582   40730 command_runner.go:130] > Change: 2024-06-10 11:12:55.236834982 +0000
	I0610 11:19:04.902593   40730 command_runner.go:130] >  Birth: 2024-06-10 11:12:55.236834982 +0000
	I0610 11:19:04.902697   40730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0610 11:19:04.908012   40730 command_runner.go:130] > Certificate will not expire
	I0610 11:19:04.908080   40730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0610 11:19:04.919510   40730 command_runner.go:130] > Certificate will not expire
	I0610 11:19:04.919571   40730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0610 11:19:04.932368   40730 command_runner.go:130] > Certificate will not expire
	I0610 11:19:04.932738   40730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0610 11:19:04.940387   40730 command_runner.go:130] > Certificate will not expire
	I0610 11:19:04.940452   40730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0610 11:19:04.945839   40730 command_runner.go:130] > Certificate will not expire
	I0610 11:19:04.945906   40730 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0610 11:19:04.951354   40730 command_runner.go:130] > Certificate will not expire
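Each of the checks above relies on openssl's -checkend flag: "openssl x509 -noout -checkend 86400" prints "Certificate will not expire" and exits 0 if the certificate is still valid 86400 seconds (24 hours) from now, and exits non-zero otherwise, which is how minikube decides whether certificates need regenerating. A minimal sketch of running the same check by hand:

    # Exit status 0: still valid for at least another 24 hours.
    sudo openssl x509 -noout -checkend 86400 \
        -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
        && echo "certificate ok" || echo "certificate expires within 24h"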
	I0610 11:19:04.951493   40730 kubeadm.go:391] StartCluster: {Name:multinode-862380 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:multinode-862380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.47 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.68 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:19:04.951640   40730 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0610 11:19:04.951709   40730 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 11:19:04.993813   40730 command_runner.go:130] > e0cb3861c89e33df4af9682d4ecbad3f6bbc0a9150d26e80be390d8550cd3e90
	I0610 11:19:04.993846   40730 command_runner.go:130] > b0bc49dc154cf467f6f2dd93ab0e78907f6d0f8592e164371108706cc509e00f
	I0610 11:19:04.993856   40730 command_runner.go:130] > f2791a953a9200b3f61b8829c703b259f1483f87c5e99ce9cfaa18109775e0fc
	I0610 11:19:04.993866   40730 command_runner.go:130] > d7dcbfcd0f6f950677096624f71b7ec58dbe647a45bfe1896dd52dd14753a55c
	I0610 11:19:04.993876   40730 command_runner.go:130] > 9c465791f6493e7b755a5672c14ce27cf99149ae704df0b5b7ba7589cbdccd3f
	I0610 11:19:04.993885   40730 command_runner.go:130] > e7b3e1262dc380437d24a63b8d3b43827f62b39b385c799ae1a3c75195a3b976
	I0610 11:19:04.993894   40730 command_runner.go:130] > 58557fa016e58b7c0cbd020c0c94ce71b80658955335b632f9b63f06aaec7266
	I0610 11:19:04.993904   40730 command_runner.go:130] > 4f84f021658bb7edbb72828c3cdce1348895737f86d83744cb73982fa6cdc4cb
	I0610 11:19:04.993935   40730 cri.go:89] found id: "e0cb3861c89e33df4af9682d4ecbad3f6bbc0a9150d26e80be390d8550cd3e90"
	I0610 11:19:04.993947   40730 cri.go:89] found id: "b0bc49dc154cf467f6f2dd93ab0e78907f6d0f8592e164371108706cc509e00f"
	I0610 11:19:04.993953   40730 cri.go:89] found id: "f2791a953a9200b3f61b8829c703b259f1483f87c5e99ce9cfaa18109775e0fc"
	I0610 11:19:04.993960   40730 cri.go:89] found id: "d7dcbfcd0f6f950677096624f71b7ec58dbe647a45bfe1896dd52dd14753a55c"
	I0610 11:19:04.993965   40730 cri.go:89] found id: "9c465791f6493e7b755a5672c14ce27cf99149ae704df0b5b7ba7589cbdccd3f"
	I0610 11:19:04.993972   40730 cri.go:89] found id: "e7b3e1262dc380437d24a63b8d3b43827f62b39b385c799ae1a3c75195a3b976"
	I0610 11:19:04.993976   40730 cri.go:89] found id: "58557fa016e58b7c0cbd020c0c94ce71b80658955335b632f9b63f06aaec7266"
	I0610 11:19:04.993981   40730 cri.go:89] found id: "4f84f021658bb7edbb72828c3cdce1348895737f86d83744cb73982fa6cdc4cb"
	I0610 11:19:04.993986   40730 cri.go:89] found id: ""
	I0610 11:19:04.994036   40730 ssh_runner.go:195] Run: sudo runc list -f json
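	For reference, the "found id:" entries above are produced by running crictl on the node over SSH and splitting its quiet output into container IDs. Below is a minimal local sketch of that listing, assuming it is run directly on the minikube node where crictl and sudo are available; the crictl flags are the ones visible in the log, while the file name, error handling, and output format are illustrative only and not part of minikube itself.

	// list_kube_system_containers.go — illustrative sketch, not minikube code.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same command the log shows ssh_runner executing on the node.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		// crictl --quiet prints one container ID per line, matching the
		// "found id:" entries logged above.
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}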
	
	
	==> CRI-O <==
	Jun 10 11:22:52 multinode-862380 crio[2864]: time="2024-06-10 11:22:52.629515676Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d37884bc-38eb-47bf-a74c-5b334eeaf5ce name=/runtime.v1.RuntimeService/Version
	Jun 10 11:22:52 multinode-862380 crio[2864]: time="2024-06-10 11:22:52.630763457Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=44a1a371-38b2-4f48-8521-320a21592e4c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:22:52 multinode-862380 crio[2864]: time="2024-06-10 11:22:52.631167028Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718018572631145117,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=44a1a371-38b2-4f48-8521-320a21592e4c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:22:52 multinode-862380 crio[2864]: time="2024-06-10 11:22:52.631706247Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=03c34187-d976-4b2e-95b6-aefeaf9bf8b9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:22:52 multinode-862380 crio[2864]: time="2024-06-10 11:22:52.631775277Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=03c34187-d976-4b2e-95b6-aefeaf9bf8b9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:22:52 multinode-862380 crio[2864]: time="2024-06-10 11:22:52.632127305Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2fc79e4de71d445e66c76ebc879593d2599c2c77229107f2a96a78737d49d6e,PodSandboxId:1daffe5524d188139839a6b1b96ad5ca5edfb98a6eff8bb442212a5c47d51c59,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718018385981027011,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jx8f9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 237e1205-8c4b-4234-ad0f-80e35f097827,},Annotations:map[string]string{io.kubernetes.container.hash: 6b71ca20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d239e403b99cbc595846d609ca3877c0378cd522cc51a4ef8e62481693d5022,PodSandboxId:fe929d942cc9e63e145c553e0aa9f5268b3af05b033b39c69c2f4bf196375602,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718018352558458161,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vfxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f70aa4-9ef6-4257-86b3-4fd0968b2e37,},Annotations:map[string]string{io.kubernetes.container.hash: 3bb49cae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d8e67a6d2840c27a7e9918a80a0a0c785dc7b6d2bd90a358d542bc6a1aabe74,PodSandboxId:abfa9aa50974623da5a50a69184494c217cf08dbc6007db84d76e812590ddb52,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718018352479657098,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bnpjz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d6d1e96-ea64-4ea0-855a-0e8fe
db5164d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f1d502d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55fbec1ed1f5c35125219a44fd079a722d49d9d8cbdb2455f8a70f01da71ed4e,PodSandboxId:4517c9efbd8541d8d1d37f445a576a5f35bb0182780f23bc213b682f1e16ae21,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718018352360363094,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gghfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6793da8-f52b-488b-a0ec-88cbf6460c13,},Annotations:map[string]
string{io.kubernetes.container.hash: ab55db52,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0a3bf0e596a0cca6f9831fcb9b458d5e853147197c42b8d6060f07e94f173f5,PodSandboxId:cdcc0f30f293274460a437197df073c4e406ed920aab513665fb6c4a8b4d8b15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718018352319151234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7966a309-dca2-488e-b683-0ff37fa01fe3,},Annotations:map[string]string{io.ku
bernetes.container.hash: 886eec8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e188af5e7614aace4ffe7147aadf26b4ae34f2212f99727a96e4a432272564dc,PodSandboxId:87bf102e2b2943355dabc72d3e0980da5c49276950d1ad4b2fc9c2f1f768e8e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718018347445798005,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134cbc49aee8e613a34fe93b9347c702,},Annotations:map[string]string{io.kubernetes.container.hash: 9e626184,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43dceb31898cc620cff1b69f4b915cc293db2955ad4fdfa09aaf24f4ba57bde1,PodSandboxId:dcd8d5c9c8cc1d7f6550cc6d27b429fa8028411f6868b679a6883186ce6898e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718018347411124949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4531b47a5c5353a3b6d9c833bc5c53,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5
e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50310344784fc7f085c0a0d226fde85f9b838c4bcfeaafbde1cf90adf4432aee,PodSandboxId:1a238893e319e44879cd357493747cefc3bd8860f007d2383c98f0d686678db0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718018347413342565,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5215e23358f00a13bf40785087f55d,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5971ca1a108b34acbf6ae63f70db7b15d696e6cd577d1f3356a2b6661bb028d8,PodSandboxId:0b2ba625d3d8f5417652f5e20ac755f7fd3a72975d10e8ac6dd75ff553730dae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718018347339762997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 403c273aa070281af0f1949448b47864,},Annotations:map[string]string{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cc5324b6db46ad8a78594835c98c73f0f42d1c87636abde9b15fb4cbd4d2151,PodSandboxId:cfbc0a4db39045ee382b6a54d8d5f5da4410877bfde75f2ee86af08cede879e0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718018047623152578,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jx8f9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 237e1205-8c4b-4234-ad0f-80e35f097827,},Annotations:map[string]string{io.kubernetes.container.hash: 6b71ca20,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cb3861c89e33df4af9682d4ecbad3f6bbc0a9150d26e80be390d8550cd3e90,PodSandboxId:024549fd085df2c3f26e3b57056e36220f606174179776d0ec5517d7ab213ed2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718018002906701577,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vfxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f70aa4-9ef6-4257-86b3-4fd0968b2e37,},Annotations:map[string]string{io.kubernetes.container.hash: 3bb49cae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0bc49dc154cf467f6f2dd93ab0e78907f6d0f8592e164371108706cc509e00f,PodSandboxId:41beb7220db38d30d9a9e09ec9c7a266465505827ab8beb5023e3e210a3baa7b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718018002842630237,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 7966a309-dca2-488e-b683-0ff37fa01fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 886eec8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2791a953a9200b3f61b8829c703b259f1483f87c5e99ce9cfaa18109775e0fc,PodSandboxId:47791e1db12ccb5a3125bf15245a19e55a3ce586fd87ad323ea1f816731386b1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718018001431173205,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bnpjz,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 6d6d1e96-ea64-4ea0-855a-0e8fedb5164d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f1d502d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7dcbfcd0f6f950677096624f71b7ec58dbe647a45bfe1896dd52dd14753a55c,PodSandboxId:a2c6585397cfe84addb16de8bb37037463d7253e6320d81daa859502341f8f85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718017997985587185,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gghfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d6793da8-f52b-488b-a0ec-88cbf6460c13,},Annotations:map[string]string{io.kubernetes.container.hash: ab55db52,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7b3e1262dc380437d24a63b8d3b43827f62b39b385c799ae1a3c75195a3b976,PodSandboxId:c88f109c2c83a6337b70493edeaa6bdda09624f9dbef45778d2ef091c19aeac1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718017978705276425,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134cbc49aee8e613a34fe93b9347c702
,},Annotations:map[string]string{io.kubernetes.container.hash: 9e626184,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c465791f6493e7b755a5672c14ce27cf99149ae704df0b5b7ba7589cbdccd3f,PodSandboxId:dc44bfa9ee46200e44408345aa810713cfebf553e56e6a32f65ec6bd305edeb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718017978724535495,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5215e23358f00a13bf40785087f55d,},Annotations:
map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58557fa016e58b7c0cbd020c0c94ce71b80658955335b632f9b63f06aaec7266,PodSandboxId:10c8e06b75105c6690ee540a76a09dcc7cc12fcbdf5b36d4eb25ead4778cc4c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718017978654023906,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 403c273aa070281af0f1949448b47864,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f84f021658bb7edbb72828c3cdce1348895737f86d83744cb73982fa6cdc4cb,PodSandboxId:04f27b50f52704344dd889054f4cf6da33cebd323a5db935ef89eb4abe78ffe8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718017978635289553,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4531b47a5c5353a3b6d9c833bc5c53,},Annotations:map
[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=03c34187-d976-4b2e-95b6-aefeaf9bf8b9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:22:52 multinode-862380 crio[2864]: time="2024-06-10 11:22:52.634563064Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=ae0cb269-7ca1-4455-b699-8b4800d0dafe name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 10 11:22:52 multinode-862380 crio[2864]: time="2024-06-10 11:22:52.635158615Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1daffe5524d188139839a6b1b96ad5ca5edfb98a6eff8bb442212a5c47d51c59,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-jx8f9,Uid:237e1205-8c4b-4234-ad0f-80e35f097827,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718018385854834505,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-jx8f9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 237e1205-8c4b-4234-ad0f-80e35f097827,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T11:19:11.726779473Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fe929d942cc9e63e145c553e0aa9f5268b3af05b033b39c69c2f4bf196375602,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-vfxw9,Uid:56f70aa4-9ef6-4257-86b3-4fd0968b2e37,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1718018352136052520,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-vfxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f70aa4-9ef6-4257-86b3-4fd0968b2e37,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T11:19:11.726791174Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4517c9efbd8541d8d1d37f445a576a5f35bb0182780f23bc213b682f1e16ae21,Metadata:&PodSandboxMetadata{Name:kube-proxy-gghfj,Uid:d6793da8-f52b-488b-a0ec-88cbf6460c13,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718018352110718304,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gghfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6793da8-f52b-488b-a0ec-88cbf6460c13,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{
kubernetes.io/config.seen: 2024-06-10T11:19:11.726796187Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:abfa9aa50974623da5a50a69184494c217cf08dbc6007db84d76e812590ddb52,Metadata:&PodSandboxMetadata{Name:kindnet-bnpjz,Uid:6d6d1e96-ea64-4ea0-855a-0e8fedb5164d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718018352080293053,Labels:map[string]string{app: kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-bnpjz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d6d1e96-ea64-4ea0-855a-0e8fedb5164d,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T11:19:11.726785462Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cdcc0f30f293274460a437197df073c4e406ed920aab513665fb6c4a8b4d8b15,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7966a309-dca2-488e-b683-0ff37fa01fe3,Namespace:kube-system,Attempt:1,},State
:SANDBOX_READY,CreatedAt:1718018352064889047,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7966a309-dca2-488e-b683-0ff37fa01fe3,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp
\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-10T11:19:11.726789870Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:87bf102e2b2943355dabc72d3e0980da5c49276950d1ad4b2fc9c2f1f768e8e0,Metadata:&PodSandboxMetadata{Name:etcd-multinode-862380,Uid:134cbc49aee8e613a34fe93b9347c702,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718018347207724809,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134cbc49aee8e613a34fe93b9347c702,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.100:2379,kubernetes.io/config.hash: 134cbc49aee8e613a34fe93b9347c702,kubernetes.io/config.seen: 2024-06-10T11:19:06.739127245Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dcd8d5c9c8cc1d7f6550cc6d27b429fa8028411f6868b679a6883186ce6898e2,Metada
ta:&PodSandboxMetadata{Name:kube-controller-manager-multinode-862380,Uid:0f4531b47a5c5353a3b6d9c833bc5c53,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718018347201521896,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4531b47a5c5353a3b6d9c833bc5c53,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0f4531b47a5c5353a3b6d9c833bc5c53,kubernetes.io/config.seen: 2024-06-10T11:19:06.739131912Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1a238893e319e44879cd357493747cefc3bd8860f007d2383c98f0d686678db0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-862380,Uid:8d5215e23358f00a13bf40785087f55d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718018347194923463,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io
.kubernetes.pod.name: kube-scheduler-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5215e23358f00a13bf40785087f55d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8d5215e23358f00a13bf40785087f55d,kubernetes.io/config.seen: 2024-06-10T11:19:06.739132935Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0b2ba625d3d8f5417652f5e20ac755f7fd3a72975d10e8ac6dd75ff553730dae,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-862380,Uid:403c273aa070281af0f1949448b47864,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1718018347194311848,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 403c273aa070281af0f1949448b47864,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.100:8443,kuberne
tes.io/config.hash: 403c273aa070281af0f1949448b47864,kubernetes.io/config.seen: 2024-06-10T11:19:06.739130582Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cfbc0a4db39045ee382b6a54d8d5f5da4410877bfde75f2ee86af08cede879e0,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-jx8f9,Uid:237e1205-8c4b-4234-ad0f-80e35f097827,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718018045121588532,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-jx8f9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 237e1205-8c4b-4234-ad0f-80e35f097827,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T11:14:04.808575248Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:41beb7220db38d30d9a9e09ec9c7a266465505827ab8beb5023e3e210a3baa7b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7966a309-dca2-488e-b683-0ff37fa01fe3,Namespace:kube-system,Attempt:0,}
,State:SANDBOX_NOTREADY,CreatedAt:1718018002713151508,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7966a309-dca2-488e-b683-0ff37fa01fe3,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path
\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-10T11:13:22.404497399Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:024549fd085df2c3f26e3b57056e36220f606174179776d0ec5517d7ab213ed2,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-vfxw9,Uid:56f70aa4-9ef6-4257-86b3-4fd0968b2e37,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718018002703520957,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-vfxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f70aa4-9ef6-4257-86b3-4fd0968b2e37,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T11:13:22.396356706Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:47791e1db12ccb5a3125bf15245a19e55a3ce586fd87ad323ea1f816731386b1,Metadata:&PodSandboxMetadata{Name:kindnet-bnpjz,Uid:6d6d1e96-ea64-4ea0-855a-0e8fedb5164d,Namespace:kube-sys
tem,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718017997735974005,Labels:map[string]string{app: kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-bnpjz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d6d1e96-ea64-4ea0-855a-0e8fedb5164d,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T11:13:17.417649738Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a2c6585397cfe84addb16de8bb37037463d7253e6320d81daa859502341f8f85,Metadata:&PodSandboxMetadata{Name:kube-proxy-gghfj,Uid:d6793da8-f52b-488b-a0ec-88cbf6460c13,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718017997735425640,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gghfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6793da8-f52b-488b-a0ec-88cbf6460c13,k8s-app: kub
e-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T11:13:17.421940696Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:10c8e06b75105c6690ee540a76a09dcc7cc12fcbdf5b36d4eb25ead4778cc4c1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-862380,Uid:403c273aa070281af0f1949448b47864,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718017978495105745,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 403c273aa070281af0f1949448b47864,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.100:8443,kubernetes.io/config.hash: 403c273aa070281af0f1949448b47864,kubernetes.io/config.seen: 2024-06-10T11:12:58.021247059Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:04f27b50f52704
344dd889054f4cf6da33cebd323a5db935ef89eb4abe78ffe8,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-862380,Uid:0f4531b47a5c5353a3b6d9c833bc5c53,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718017978491863068,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4531b47a5c5353a3b6d9c833bc5c53,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0f4531b47a5c5353a3b6d9c833bc5c53,kubernetes.io/config.seen: 2024-06-10T11:12:58.021248311Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c88f109c2c83a6337b70493edeaa6bdda09624f9dbef45778d2ef091c19aeac1,Metadata:&PodSandboxMetadata{Name:etcd-multinode-862380,Uid:134cbc49aee8e613a34fe93b9347c702,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718017978489978175,Labels:map[string]string{component
: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134cbc49aee8e613a34fe93b9347c702,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.100:2379,kubernetes.io/config.hash: 134cbc49aee8e613a34fe93b9347c702,kubernetes.io/config.seen: 2024-06-10T11:12:58.021242290Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dc44bfa9ee46200e44408345aa810713cfebf553e56e6a32f65ec6bd305edeb0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-862380,Uid:8d5215e23358f00a13bf40785087f55d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1718017978472157614,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5215e23358f00a13bf40785087f55d,tier: control-plane,},Annotati
ons:map[string]string{kubernetes.io/config.hash: 8d5215e23358f00a13bf40785087f55d,kubernetes.io/config.seen: 2024-06-10T11:12:58.021249617Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=ae0cb269-7ca1-4455-b699-8b4800d0dafe name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 10 11:22:52 multinode-862380 crio[2864]: time="2024-06-10 11:22:52.636069079Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6de4e484-ad96-481b-8605-b8d78e544d42 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:22:52 multinode-862380 crio[2864]: time="2024-06-10 11:22:52.636121932Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6de4e484-ad96-481b-8605-b8d78e544d42 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:22:52 multinode-862380 crio[2864]: time="2024-06-10 11:22:52.636424403Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2fc79e4de71d445e66c76ebc879593d2599c2c77229107f2a96a78737d49d6e,PodSandboxId:1daffe5524d188139839a6b1b96ad5ca5edfb98a6eff8bb442212a5c47d51c59,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718018385981027011,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jx8f9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 237e1205-8c4b-4234-ad0f-80e35f097827,},Annotations:map[string]string{io.kubernetes.container.hash: 6b71ca20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d239e403b99cbc595846d609ca3877c0378cd522cc51a4ef8e62481693d5022,PodSandboxId:fe929d942cc9e63e145c553e0aa9f5268b3af05b033b39c69c2f4bf196375602,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718018352558458161,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vfxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f70aa4-9ef6-4257-86b3-4fd0968b2e37,},Annotations:map[string]string{io.kubernetes.container.hash: 3bb49cae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d8e67a6d2840c27a7e9918a80a0a0c785dc7b6d2bd90a358d542bc6a1aabe74,PodSandboxId:abfa9aa50974623da5a50a69184494c217cf08dbc6007db84d76e812590ddb52,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718018352479657098,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bnpjz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d6d1e96-ea64-4ea0-855a-0e8fe
db5164d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f1d502d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55fbec1ed1f5c35125219a44fd079a722d49d9d8cbdb2455f8a70f01da71ed4e,PodSandboxId:4517c9efbd8541d8d1d37f445a576a5f35bb0182780f23bc213b682f1e16ae21,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718018352360363094,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gghfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6793da8-f52b-488b-a0ec-88cbf6460c13,},Annotations:map[string]
string{io.kubernetes.container.hash: ab55db52,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0a3bf0e596a0cca6f9831fcb9b458d5e853147197c42b8d6060f07e94f173f5,PodSandboxId:cdcc0f30f293274460a437197df073c4e406ed920aab513665fb6c4a8b4d8b15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718018352319151234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7966a309-dca2-488e-b683-0ff37fa01fe3,},Annotations:map[string]string{io.ku
bernetes.container.hash: 886eec8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e188af5e7614aace4ffe7147aadf26b4ae34f2212f99727a96e4a432272564dc,PodSandboxId:87bf102e2b2943355dabc72d3e0980da5c49276950d1ad4b2fc9c2f1f768e8e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718018347445798005,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134cbc49aee8e613a34fe93b9347c702,},Annotations:map[string]string{io.kubernetes.container.hash: 9e626184,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43dceb31898cc620cff1b69f4b915cc293db2955ad4fdfa09aaf24f4ba57bde1,PodSandboxId:dcd8d5c9c8cc1d7f6550cc6d27b429fa8028411f6868b679a6883186ce6898e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718018347411124949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4531b47a5c5353a3b6d9c833bc5c53,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5
e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50310344784fc7f085c0a0d226fde85f9b838c4bcfeaafbde1cf90adf4432aee,PodSandboxId:1a238893e319e44879cd357493747cefc3bd8860f007d2383c98f0d686678db0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718018347413342565,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5215e23358f00a13bf40785087f55d,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5971ca1a108b34acbf6ae63f70db7b15d696e6cd577d1f3356a2b6661bb028d8,PodSandboxId:0b2ba625d3d8f5417652f5e20ac755f7fd3a72975d10e8ac6dd75ff553730dae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718018347339762997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 403c273aa070281af0f1949448b47864,},Annotations:map[string]string{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cc5324b6db46ad8a78594835c98c73f0f42d1c87636abde9b15fb4cbd4d2151,PodSandboxId:cfbc0a4db39045ee382b6a54d8d5f5da4410877bfde75f2ee86af08cede879e0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718018047623152578,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jx8f9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 237e1205-8c4b-4234-ad0f-80e35f097827,},Annotations:map[string]string{io.kubernetes.container.hash: 6b71ca20,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cb3861c89e33df4af9682d4ecbad3f6bbc0a9150d26e80be390d8550cd3e90,PodSandboxId:024549fd085df2c3f26e3b57056e36220f606174179776d0ec5517d7ab213ed2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718018002906701577,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vfxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f70aa4-9ef6-4257-86b3-4fd0968b2e37,},Annotations:map[string]string{io.kubernetes.container.hash: 3bb49cae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0bc49dc154cf467f6f2dd93ab0e78907f6d0f8592e164371108706cc509e00f,PodSandboxId:41beb7220db38d30d9a9e09ec9c7a266465505827ab8beb5023e3e210a3baa7b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718018002842630237,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 7966a309-dca2-488e-b683-0ff37fa01fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 886eec8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2791a953a9200b3f61b8829c703b259f1483f87c5e99ce9cfaa18109775e0fc,PodSandboxId:47791e1db12ccb5a3125bf15245a19e55a3ce586fd87ad323ea1f816731386b1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718018001431173205,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bnpjz,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 6d6d1e96-ea64-4ea0-855a-0e8fedb5164d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f1d502d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7dcbfcd0f6f950677096624f71b7ec58dbe647a45bfe1896dd52dd14753a55c,PodSandboxId:a2c6585397cfe84addb16de8bb37037463d7253e6320d81daa859502341f8f85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718017997985587185,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gghfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d6793da8-f52b-488b-a0ec-88cbf6460c13,},Annotations:map[string]string{io.kubernetes.container.hash: ab55db52,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7b3e1262dc380437d24a63b8d3b43827f62b39b385c799ae1a3c75195a3b976,PodSandboxId:c88f109c2c83a6337b70493edeaa6bdda09624f9dbef45778d2ef091c19aeac1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718017978705276425,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134cbc49aee8e613a34fe93b9347c702
,},Annotations:map[string]string{io.kubernetes.container.hash: 9e626184,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c465791f6493e7b755a5672c14ce27cf99149ae704df0b5b7ba7589cbdccd3f,PodSandboxId:dc44bfa9ee46200e44408345aa810713cfebf553e56e6a32f65ec6bd305edeb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718017978724535495,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5215e23358f00a13bf40785087f55d,},Annotations:
map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58557fa016e58b7c0cbd020c0c94ce71b80658955335b632f9b63f06aaec7266,PodSandboxId:10c8e06b75105c6690ee540a76a09dcc7cc12fcbdf5b36d4eb25ead4778cc4c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718017978654023906,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 403c273aa070281af0f1949448b47864,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f84f021658bb7edbb72828c3cdce1348895737f86d83744cb73982fa6cdc4cb,PodSandboxId:04f27b50f52704344dd889054f4cf6da33cebd323a5db935ef89eb4abe78ffe8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718017978635289553,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4531b47a5c5353a3b6d9c833bc5c53,},Annotations:map
[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6de4e484-ad96-481b-8605-b8d78e544d42 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:22:52 multinode-862380 crio[2864]: time="2024-06-10 11:22:52.675721412Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=25f8283d-720f-4a11-bd58-259e20077ede name=/runtime.v1.RuntimeService/Version
	Jun 10 11:22:52 multinode-862380 crio[2864]: time="2024-06-10 11:22:52.675812625Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=25f8283d-720f-4a11-bd58-259e20077ede name=/runtime.v1.RuntimeService/Version
	Jun 10 11:22:52 multinode-862380 crio[2864]: time="2024-06-10 11:22:52.677009259Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=32b1f903-92cd-4f21-a22a-e92bdc27c944 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:22:52 multinode-862380 crio[2864]: time="2024-06-10 11:22:52.677429110Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718018572677407033,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=32b1f903-92cd-4f21-a22a-e92bdc27c944 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:22:52 multinode-862380 crio[2864]: time="2024-06-10 11:22:52.678007423Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a06ed4d-7513-484a-bc26-e2870ed87b20 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:22:52 multinode-862380 crio[2864]: time="2024-06-10 11:22:52.678084975Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a06ed4d-7513-484a-bc26-e2870ed87b20 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:22:52 multinode-862380 crio[2864]: time="2024-06-10 11:22:52.678407512Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2fc79e4de71d445e66c76ebc879593d2599c2c77229107f2a96a78737d49d6e,PodSandboxId:1daffe5524d188139839a6b1b96ad5ca5edfb98a6eff8bb442212a5c47d51c59,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718018385981027011,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jx8f9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 237e1205-8c4b-4234-ad0f-80e35f097827,},Annotations:map[string]string{io.kubernetes.container.hash: 6b71ca20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d239e403b99cbc595846d609ca3877c0378cd522cc51a4ef8e62481693d5022,PodSandboxId:fe929d942cc9e63e145c553e0aa9f5268b3af05b033b39c69c2f4bf196375602,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718018352558458161,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vfxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f70aa4-9ef6-4257-86b3-4fd0968b2e37,},Annotations:map[string]string{io.kubernetes.container.hash: 3bb49cae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d8e67a6d2840c27a7e9918a80a0a0c785dc7b6d2bd90a358d542bc6a1aabe74,PodSandboxId:abfa9aa50974623da5a50a69184494c217cf08dbc6007db84d76e812590ddb52,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718018352479657098,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bnpjz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d6d1e96-ea64-4ea0-855a-0e8fe
db5164d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f1d502d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55fbec1ed1f5c35125219a44fd079a722d49d9d8cbdb2455f8a70f01da71ed4e,PodSandboxId:4517c9efbd8541d8d1d37f445a576a5f35bb0182780f23bc213b682f1e16ae21,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718018352360363094,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gghfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6793da8-f52b-488b-a0ec-88cbf6460c13,},Annotations:map[string]
string{io.kubernetes.container.hash: ab55db52,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0a3bf0e596a0cca6f9831fcb9b458d5e853147197c42b8d6060f07e94f173f5,PodSandboxId:cdcc0f30f293274460a437197df073c4e406ed920aab513665fb6c4a8b4d8b15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718018352319151234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7966a309-dca2-488e-b683-0ff37fa01fe3,},Annotations:map[string]string{io.ku
bernetes.container.hash: 886eec8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e188af5e7614aace4ffe7147aadf26b4ae34f2212f99727a96e4a432272564dc,PodSandboxId:87bf102e2b2943355dabc72d3e0980da5c49276950d1ad4b2fc9c2f1f768e8e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718018347445798005,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134cbc49aee8e613a34fe93b9347c702,},Annotations:map[string]string{io.kubernetes.container.hash: 9e626184,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43dceb31898cc620cff1b69f4b915cc293db2955ad4fdfa09aaf24f4ba57bde1,PodSandboxId:dcd8d5c9c8cc1d7f6550cc6d27b429fa8028411f6868b679a6883186ce6898e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718018347411124949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4531b47a5c5353a3b6d9c833bc5c53,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5
e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50310344784fc7f085c0a0d226fde85f9b838c4bcfeaafbde1cf90adf4432aee,PodSandboxId:1a238893e319e44879cd357493747cefc3bd8860f007d2383c98f0d686678db0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718018347413342565,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5215e23358f00a13bf40785087f55d,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5971ca1a108b34acbf6ae63f70db7b15d696e6cd577d1f3356a2b6661bb028d8,PodSandboxId:0b2ba625d3d8f5417652f5e20ac755f7fd3a72975d10e8ac6dd75ff553730dae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718018347339762997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 403c273aa070281af0f1949448b47864,},Annotations:map[string]string{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cc5324b6db46ad8a78594835c98c73f0f42d1c87636abde9b15fb4cbd4d2151,PodSandboxId:cfbc0a4db39045ee382b6a54d8d5f5da4410877bfde75f2ee86af08cede879e0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718018047623152578,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jx8f9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 237e1205-8c4b-4234-ad0f-80e35f097827,},Annotations:map[string]string{io.kubernetes.container.hash: 6b71ca20,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cb3861c89e33df4af9682d4ecbad3f6bbc0a9150d26e80be390d8550cd3e90,PodSandboxId:024549fd085df2c3f26e3b57056e36220f606174179776d0ec5517d7ab213ed2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718018002906701577,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vfxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f70aa4-9ef6-4257-86b3-4fd0968b2e37,},Annotations:map[string]string{io.kubernetes.container.hash: 3bb49cae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0bc49dc154cf467f6f2dd93ab0e78907f6d0f8592e164371108706cc509e00f,PodSandboxId:41beb7220db38d30d9a9e09ec9c7a266465505827ab8beb5023e3e210a3baa7b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718018002842630237,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 7966a309-dca2-488e-b683-0ff37fa01fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 886eec8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2791a953a9200b3f61b8829c703b259f1483f87c5e99ce9cfaa18109775e0fc,PodSandboxId:47791e1db12ccb5a3125bf15245a19e55a3ce586fd87ad323ea1f816731386b1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718018001431173205,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bnpjz,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 6d6d1e96-ea64-4ea0-855a-0e8fedb5164d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f1d502d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7dcbfcd0f6f950677096624f71b7ec58dbe647a45bfe1896dd52dd14753a55c,PodSandboxId:a2c6585397cfe84addb16de8bb37037463d7253e6320d81daa859502341f8f85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718017997985587185,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gghfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d6793da8-f52b-488b-a0ec-88cbf6460c13,},Annotations:map[string]string{io.kubernetes.container.hash: ab55db52,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7b3e1262dc380437d24a63b8d3b43827f62b39b385c799ae1a3c75195a3b976,PodSandboxId:c88f109c2c83a6337b70493edeaa6bdda09624f9dbef45778d2ef091c19aeac1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718017978705276425,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134cbc49aee8e613a34fe93b9347c702
,},Annotations:map[string]string{io.kubernetes.container.hash: 9e626184,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c465791f6493e7b755a5672c14ce27cf99149ae704df0b5b7ba7589cbdccd3f,PodSandboxId:dc44bfa9ee46200e44408345aa810713cfebf553e56e6a32f65ec6bd305edeb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718017978724535495,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5215e23358f00a13bf40785087f55d,},Annotations:
map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58557fa016e58b7c0cbd020c0c94ce71b80658955335b632f9b63f06aaec7266,PodSandboxId:10c8e06b75105c6690ee540a76a09dcc7cc12fcbdf5b36d4eb25ead4778cc4c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718017978654023906,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 403c273aa070281af0f1949448b47864,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f84f021658bb7edbb72828c3cdce1348895737f86d83744cb73982fa6cdc4cb,PodSandboxId:04f27b50f52704344dd889054f4cf6da33cebd323a5db935ef89eb4abe78ffe8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718017978635289553,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4531b47a5c5353a3b6d9c833bc5c53,},Annotations:map
[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a06ed4d-7513-484a-bc26-e2870ed87b20 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:22:52 multinode-862380 crio[2864]: time="2024-06-10 11:22:52.722421734Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0553ce33-9ed9-43f9-9857-5f896659838f name=/runtime.v1.RuntimeService/Version
	Jun 10 11:22:52 multinode-862380 crio[2864]: time="2024-06-10 11:22:52.722501556Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0553ce33-9ed9-43f9-9857-5f896659838f name=/runtime.v1.RuntimeService/Version
	Jun 10 11:22:52 multinode-862380 crio[2864]: time="2024-06-10 11:22:52.723573918Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4717e2fe-4457-4474-a703-632c3ea8a87d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:22:52 multinode-862380 crio[2864]: time="2024-06-10 11:22:52.724149717Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718018572724126568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4717e2fe-4457-4474-a703-632c3ea8a87d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:22:52 multinode-862380 crio[2864]: time="2024-06-10 11:22:52.724552153Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41423eae-7367-477c-9094-a2cc0b1ec1c5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:22:52 multinode-862380 crio[2864]: time="2024-06-10 11:22:52.724658459Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41423eae-7367-477c-9094-a2cc0b1ec1c5 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:22:52 multinode-862380 crio[2864]: time="2024-06-10 11:22:52.724987227Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2fc79e4de71d445e66c76ebc879593d2599c2c77229107f2a96a78737d49d6e,PodSandboxId:1daffe5524d188139839a6b1b96ad5ca5edfb98a6eff8bb442212a5c47d51c59,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1718018385981027011,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jx8f9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 237e1205-8c4b-4234-ad0f-80e35f097827,},Annotations:map[string]string{io.kubernetes.container.hash: 6b71ca20,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d239e403b99cbc595846d609ca3877c0378cd522cc51a4ef8e62481693d5022,PodSandboxId:fe929d942cc9e63e145c553e0aa9f5268b3af05b033b39c69c2f4bf196375602,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718018352558458161,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vfxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f70aa4-9ef6-4257-86b3-4fd0968b2e37,},Annotations:map[string]string{io.kubernetes.container.hash: 3bb49cae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d8e67a6d2840c27a7e9918a80a0a0c785dc7b6d2bd90a358d542bc6a1aabe74,PodSandboxId:abfa9aa50974623da5a50a69184494c217cf08dbc6007db84d76e812590ddb52,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1718018352479657098,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bnpjz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d6d1e96-ea64-4ea0-855a-0e8fe
db5164d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f1d502d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55fbec1ed1f5c35125219a44fd079a722d49d9d8cbdb2455f8a70f01da71ed4e,PodSandboxId:4517c9efbd8541d8d1d37f445a576a5f35bb0182780f23bc213b682f1e16ae21,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718018352360363094,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gghfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6793da8-f52b-488b-a0ec-88cbf6460c13,},Annotations:map[string]
string{io.kubernetes.container.hash: ab55db52,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0a3bf0e596a0cca6f9831fcb9b458d5e853147197c42b8d6060f07e94f173f5,PodSandboxId:cdcc0f30f293274460a437197df073c4e406ed920aab513665fb6c4a8b4d8b15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718018352319151234,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7966a309-dca2-488e-b683-0ff37fa01fe3,},Annotations:map[string]string{io.ku
bernetes.container.hash: 886eec8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e188af5e7614aace4ffe7147aadf26b4ae34f2212f99727a96e4a432272564dc,PodSandboxId:87bf102e2b2943355dabc72d3e0980da5c49276950d1ad4b2fc9c2f1f768e8e0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718018347445798005,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134cbc49aee8e613a34fe93b9347c702,},Annotations:map[string]string{io.kubernetes.container.hash: 9e626184,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43dceb31898cc620cff1b69f4b915cc293db2955ad4fdfa09aaf24f4ba57bde1,PodSandboxId:dcd8d5c9c8cc1d7f6550cc6d27b429fa8028411f6868b679a6883186ce6898e2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718018347411124949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4531b47a5c5353a3b6d9c833bc5c53,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5
e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50310344784fc7f085c0a0d226fde85f9b838c4bcfeaafbde1cf90adf4432aee,PodSandboxId:1a238893e319e44879cd357493747cefc3bd8860f007d2383c98f0d686678db0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718018347413342565,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5215e23358f00a13bf40785087f55d,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5971ca1a108b34acbf6ae63f70db7b15d696e6cd577d1f3356a2b6661bb028d8,PodSandboxId:0b2ba625d3d8f5417652f5e20ac755f7fd3a72975d10e8ac6dd75ff553730dae,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718018347339762997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 403c273aa070281af0f1949448b47864,},Annotations:map[string]string{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cc5324b6db46ad8a78594835c98c73f0f42d1c87636abde9b15fb4cbd4d2151,PodSandboxId:cfbc0a4db39045ee382b6a54d8d5f5da4410877bfde75f2ee86af08cede879e0,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1718018047623152578,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-jx8f9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 237e1205-8c4b-4234-ad0f-80e35f097827,},Annotations:map[string]string{io.kubernetes.container.hash: 6b71ca20,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0cb3861c89e33df4af9682d4ecbad3f6bbc0a9150d26e80be390d8550cd3e90,PodSandboxId:024549fd085df2c3f26e3b57056e36220f606174179776d0ec5517d7ab213ed2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1718018002906701577,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vfxw9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56f70aa4-9ef6-4257-86b3-4fd0968b2e37,},Annotations:map[string]string{io.kubernetes.container.hash: 3bb49cae,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0bc49dc154cf467f6f2dd93ab0e78907f6d0f8592e164371108706cc509e00f,PodSandboxId:41beb7220db38d30d9a9e09ec9c7a266465505827ab8beb5023e3e210a3baa7b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718018002842630237,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 7966a309-dca2-488e-b683-0ff37fa01fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 886eec8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2791a953a9200b3f61b8829c703b259f1483f87c5e99ce9cfaa18109775e0fc,PodSandboxId:47791e1db12ccb5a3125bf15245a19e55a3ce586fd87ad323ea1f816731386b1,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1718018001431173205,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-bnpjz,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 6d6d1e96-ea64-4ea0-855a-0e8fedb5164d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f1d502d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7dcbfcd0f6f950677096624f71b7ec58dbe647a45bfe1896dd52dd14753a55c,PodSandboxId:a2c6585397cfe84addb16de8bb37037463d7253e6320d81daa859502341f8f85,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1718017997985587185,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gghfj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: d6793da8-f52b-488b-a0ec-88cbf6460c13,},Annotations:map[string]string{io.kubernetes.container.hash: ab55db52,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7b3e1262dc380437d24a63b8d3b43827f62b39b385c799ae1a3c75195a3b976,PodSandboxId:c88f109c2c83a6337b70493edeaa6bdda09624f9dbef45778d2ef091c19aeac1,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718017978705276425,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 134cbc49aee8e613a34fe93b9347c702
,},Annotations:map[string]string{io.kubernetes.container.hash: 9e626184,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c465791f6493e7b755a5672c14ce27cf99149ae704df0b5b7ba7589cbdccd3f,PodSandboxId:dc44bfa9ee46200e44408345aa810713cfebf553e56e6a32f65ec6bd305edeb0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718017978724535495,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d5215e23358f00a13bf40785087f55d,},Annotations:
map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58557fa016e58b7c0cbd020c0c94ce71b80658955335b632f9b63f06aaec7266,PodSandboxId:10c8e06b75105c6690ee540a76a09dcc7cc12fcbdf5b36d4eb25ead4778cc4c1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718017978654023906,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 403c273aa070281af0f1949448b47864,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f40b1b90,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f84f021658bb7edbb72828c3cdce1348895737f86d83744cb73982fa6cdc4cb,PodSandboxId:04f27b50f52704344dd889054f4cf6da33cebd323a5db935ef89eb4abe78ffe8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718017978635289553,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-862380,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f4531b47a5c5353a3b6d9c833bc5c53,},Annotations:map
[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41423eae-7367-477c-9094-a2cc0b1ec1c5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c2fc79e4de71d       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   1daffe5524d18       busybox-fc5497c4f-jx8f9
	1d239e403b99c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   fe929d942cc9e       coredns-7db6d8ff4d-vfxw9
	5d8e67a6d2840       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      3 minutes ago       Running             kindnet-cni               1                   abfa9aa509746       kindnet-bnpjz
	55fbec1ed1f5c       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      3 minutes ago       Running             kube-proxy                1                   4517c9efbd854       kube-proxy-gghfj
	a0a3bf0e596a0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   cdcc0f30f2932       storage-provisioner
	e188af5e7614a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   87bf102e2b294       etcd-multinode-862380
	50310344784fc       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      3 minutes ago       Running             kube-scheduler            1                   1a238893e319e       kube-scheduler-multinode-862380
	43dceb31898cc       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      3 minutes ago       Running             kube-controller-manager   1                   dcd8d5c9c8cc1       kube-controller-manager-multinode-862380
	5971ca1a108b3       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      3 minutes ago       Running             kube-apiserver            1                   0b2ba625d3d8f       kube-apiserver-multinode-862380
	7cc5324b6db46       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   cfbc0a4db3904       busybox-fc5497c4f-jx8f9
	e0cb3861c89e3       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   024549fd085df       coredns-7db6d8ff4d-vfxw9
	b0bc49dc154cf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   41beb7220db38       storage-provisioner
	f2791a953a920       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266    9 minutes ago       Exited              kindnet-cni               0                   47791e1db12cc       kindnet-bnpjz
	d7dcbfcd0f6f9       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      9 minutes ago       Exited              kube-proxy                0                   a2c6585397cfe       kube-proxy-gghfj
	9c465791f6493       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      9 minutes ago       Exited              kube-scheduler            0                   dc44bfa9ee462       kube-scheduler-multinode-862380
	e7b3e1262dc38       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      9 minutes ago       Exited              etcd                      0                   c88f109c2c83a       etcd-multinode-862380
	58557fa016e58       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      9 minutes ago       Exited              kube-apiserver            0                   10c8e06b75105       kube-apiserver-multinode-862380
	4f84f021658bb       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      9 minutes ago       Exited              kube-controller-manager   0                   04f27b50f5270       kube-controller-manager-multinode-862380
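
The container status listing above shows each workload twice: the attempt 0 containers (Exited) from before the node restart and the attempt 1 containers (Running) created after it. A listing in this format can typically be reproduced against this profile by running crictl inside the node, along these lines (a sketch reusing the profile name from these logs; -a includes exited containers):

out/minikube-linux-amd64 -p multinode-862380 ssh "sudo crictl ps -a"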
	
	
	==> coredns [1d239e403b99cbc595846d609ca3877c0378cd522cc51a4ef8e62481693d5022] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36247 - 62168 "HINFO IN 1200695844873085136.5283719998216550195. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023148131s
	
	
	==> coredns [e0cb3861c89e33df4af9682d4ecbad3f6bbc0a9150d26e80be390d8550cd3e90] <==
	[INFO] 10.244.1.2:54313 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002039405s
	[INFO] 10.244.1.2:36796 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099531s
	[INFO] 10.244.1.2:42431 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000067579s
	[INFO] 10.244.1.2:35027 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002059486s
	[INFO] 10.244.1.2:48138 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000138374s
	[INFO] 10.244.1.2:57481 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072838s
	[INFO] 10.244.1.2:58012 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000082046s
	[INFO] 10.244.0.3:34666 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000072309s
	[INFO] 10.244.0.3:42571 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000508s
	[INFO] 10.244.0.3:33740 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000046907s
	[INFO] 10.244.0.3:52883 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000032661s
	[INFO] 10.244.1.2:55811 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132338s
	[INFO] 10.244.1.2:44313 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000083809s
	[INFO] 10.244.1.2:45315 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082396s
	[INFO] 10.244.1.2:40327 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067705s
	[INFO] 10.244.0.3:53262 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125203s
	[INFO] 10.244.0.3:33362 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117514s
	[INFO] 10.244.0.3:55521 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000171111s
	[INFO] 10.244.0.3:34043 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000080604s
	[INFO] 10.244.1.2:42263 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117724s
	[INFO] 10.244.1.2:48635 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000123254s
	[INFO] 10.244.1.2:42541 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119423s
	[INFO] 10.244.1.2:52962 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121279s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
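
The block above comes from the exited attempt-0 coredns container (e0cb3861c89e3) and ends with the SIGTERM it received when the node was stopped; the attempt-1 container's log is in the preceding block. Logs for the exited container can generally be pulled back with either of the following (a sketch reusing the pod name and container ID shown earlier):

kubectl --context multinode-862380 -n kube-system logs --previous coredns-7db6d8ff4d-vfxw9
out/minikube-linux-amd64 -p multinode-862380 ssh "sudo crictl logs e0cb3861c89e3"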
	
	
	==> describe nodes <==
	Name:               multinode-862380
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-862380
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=multinode-862380
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T11_13_04_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 11:13:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-862380
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 11:22:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 11:19:11 +0000   Mon, 10 Jun 2024 11:12:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 11:19:11 +0000   Mon, 10 Jun 2024 11:12:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 11:19:11 +0000   Mon, 10 Jun 2024 11:12:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 11:19:11 +0000   Mon, 10 Jun 2024 11:13:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    multinode-862380
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8956567b7bc94df6916f5e4faa01fbfb
	  System UUID:                8956567b-7bc9-4df6-916f-5e4faa01fbfb
	  Boot ID:                    9746547f-4a12-4129-881a-ffbf15d2057e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-jx8f9                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m49s
	  kube-system                 coredns-7db6d8ff4d-vfxw9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m36s
	  kube-system                 etcd-multinode-862380                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m50s
	  kube-system                 kindnet-bnpjz                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m36s
	  kube-system                 kube-apiserver-multinode-862380             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 kube-controller-manager-multinode-862380    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 kube-proxy-gghfj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m36s
	  kube-system                 kube-scheduler-multinode-862380             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m34s                  kube-proxy       
	  Normal  Starting                 3m40s                  kube-proxy       
	  Normal  Starting                 9m55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m55s (x8 over 9m55s)  kubelet          Node multinode-862380 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m55s (x8 over 9m55s)  kubelet          Node multinode-862380 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m55s (x7 over 9m55s)  kubelet          Node multinode-862380 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    9m50s                  kubelet          Node multinode-862380 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  9m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m50s                  kubelet          Node multinode-862380 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     9m50s                  kubelet          Node multinode-862380 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m50s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m37s                  node-controller  Node multinode-862380 event: Registered Node multinode-862380 in Controller
	  Normal  NodeReady                9m31s                  kubelet          Node multinode-862380 status is now: NodeReady
	  Normal  Starting                 3m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m47s (x8 over 3m47s)  kubelet          Node multinode-862380 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m47s (x8 over 3m47s)  kubelet          Node multinode-862380 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m47s (x7 over 3m47s)  kubelet          Node multinode-862380 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m30s                  node-controller  Node multinode-862380 event: Registered Node multinode-862380 in Controller
	
	
	Name:               multinode-862380-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-862380-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=multinode-862380
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_06_10T11_19_50_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 11:19:49 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-862380-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 11:20:30 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 10 Jun 2024 11:20:20 +0000   Mon, 10 Jun 2024 11:21:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 10 Jun 2024 11:20:20 +0000   Mon, 10 Jun 2024 11:21:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 10 Jun 2024 11:20:20 +0000   Mon, 10 Jun 2024 11:21:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 10 Jun 2024 11:20:20 +0000   Mon, 10 Jun 2024 11:21:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.47
	  Hostname:    multinode-862380-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 32d4e1e72b73492d8bcbcbaf9ac8e1d9
	  System UUID:                32d4e1e7-2b73-492d-8bcb-cbaf9ac8e1d9
	  Boot ID:                    2bb01a2f-dd28-47e1-b530-fb3cdee20701
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-v8jhp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  kube-system                 kindnet-ctwr4              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m1s
	  kube-system                 kube-proxy-n8lzw           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m55s                kube-proxy       
	  Normal  Starting                 2m59s                kube-proxy       
	  Normal  NodeHasNoDiskPressure    9m1s (x2 over 9m1s)  kubelet          Node multinode-862380-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m1s (x2 over 9m1s)  kubelet          Node multinode-862380-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m1s (x2 over 9m1s)  kubelet          Node multinode-862380-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                8m51s                kubelet          Node multinode-862380-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m4s (x2 over 3m4s)  kubelet          Node multinode-862380-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m4s (x2 over 3m4s)  kubelet          Node multinode-862380-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m4s (x2 over 3m4s)  kubelet          Node multinode-862380-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m                   node-controller  Node multinode-862380-m02 event: Registered Node multinode-862380-m02 in Controller
	  Normal  NodeReady                2m56s                kubelet          Node multinode-862380-m02 status is now: NodeReady
	  Normal  NodeNotReady             100s                 node-controller  Node multinode-862380-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.053278] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.158847] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.141099] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.248906] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +3.896513] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +3.984003] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.065848] kauditd_printk_skb: 158 callbacks suppressed
	[Jun10 11:13] systemd-fstab-generator[1279]: Ignoring "noauto" option for root device
	[  +0.069613] kauditd_printk_skb: 69 callbacks suppressed
	[ +14.040712] systemd-fstab-generator[1475]: Ignoring "noauto" option for root device
	[  +0.106567] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.001235] kauditd_printk_skb: 60 callbacks suppressed
	[Jun10 11:14] kauditd_printk_skb: 14 callbacks suppressed
	[Jun10 11:18] systemd-fstab-generator[2776]: Ignoring "noauto" option for root device
	[  +0.145732] systemd-fstab-generator[2788]: Ignoring "noauto" option for root device
	[  +0.171468] systemd-fstab-generator[2802]: Ignoring "noauto" option for root device
	[  +0.133880] systemd-fstab-generator[2814]: Ignoring "noauto" option for root device
	[  +0.267198] systemd-fstab-generator[2842]: Ignoring "noauto" option for root device
	[Jun10 11:19] systemd-fstab-generator[2949]: Ignoring "noauto" option for root device
	[  +2.123912] systemd-fstab-generator[3071]: Ignoring "noauto" option for root device
	[  +0.081701] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.583360] kauditd_printk_skb: 52 callbacks suppressed
	[ +11.470341] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.972924] systemd-fstab-generator[3880]: Ignoring "noauto" option for root device
	[ +21.276139] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [e188af5e7614aace4ffe7147aadf26b4ae34f2212f99727a96e4a432272564dc] <==
	{"level":"info","ts":"2024-06-10T11:19:07.956639Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-10T11:19:07.95665Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-10T11:19:07.957215Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 switched to configuration voters=(3636168928135421492)"}
	{"level":"info","ts":"2024-06-10T11:19:07.957324Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6cf58294dcaef1c8","local-member-id":"3276445ff8d31e34","added-peer-id":"3276445ff8d31e34","added-peer-peer-urls":["https://192.168.39.100:2380"]}
	{"level":"info","ts":"2024-06-10T11:19:07.959681Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6cf58294dcaef1c8","local-member-id":"3276445ff8d31e34","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T11:19:07.959768Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T11:19:07.964388Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-10T11:19:07.972873Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3276445ff8d31e34","initial-advertise-peer-urls":["https://192.168.39.100:2380"],"listen-peer-urls":["https://192.168.39.100:2380"],"advertise-client-urls":["https://192.168.39.100:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.100:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-10T11:19:07.97304Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-10T11:19:07.964785Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-06-10T11:19:07.976218Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-06-10T11:19:09.597332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-10T11:19:09.597383Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-10T11:19:09.597431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 received MsgPreVoteResp from 3276445ff8d31e34 at term 2"}
	{"level":"info","ts":"2024-06-10T11:19:09.597444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 became candidate at term 3"}
	{"level":"info","ts":"2024-06-10T11:19:09.597449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 received MsgVoteResp from 3276445ff8d31e34 at term 3"}
	{"level":"info","ts":"2024-06-10T11:19:09.597457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 became leader at term 3"}
	{"level":"info","ts":"2024-06-10T11:19:09.597467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3276445ff8d31e34 elected leader 3276445ff8d31e34 at term 3"}
	{"level":"info","ts":"2024-06-10T11:19:09.602782Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"3276445ff8d31e34","local-member-attributes":"{Name:multinode-862380 ClientURLs:[https://192.168.39.100:2379]}","request-path":"/0/members/3276445ff8d31e34/attributes","cluster-id":"6cf58294dcaef1c8","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-10T11:19:09.60293Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T11:19:09.60321Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-10T11:19:09.603281Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-10T11:19:09.603354Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T11:19:09.605203Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.100:2379"}
	{"level":"info","ts":"2024-06-10T11:19:09.605219Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [e7b3e1262dc380437d24a63b8d3b43827f62b39b385c799ae1a3c75195a3b976] <==
	{"level":"info","ts":"2024-06-10T11:12:59.902914Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T11:12:59.903237Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T11:12:59.903369Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-10T11:12:59.903411Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-10T11:12:59.903727Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6cf58294dcaef1c8","local-member-id":"3276445ff8d31e34","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T11:12:59.903832Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T11:12:59.90387Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T11:12:59.905259Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-10T11:12:59.91302Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.100:2379"}
	{"level":"info","ts":"2024-06-10T11:13:52.704222Z","caller":"traceutil/trace.go:171","msg":"trace[2123348808] transaction","detail":"{read_only:false; response_revision:477; number_of_response:1; }","duration":"200.267207ms","start":"2024-06-10T11:13:52.503933Z","end":"2024-06-10T11:13:52.7042Z","steps":["trace[2123348808] 'process raft request'  (duration: 200.213634ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T11:13:52.704238Z","caller":"traceutil/trace.go:171","msg":"trace[413522120] transaction","detail":"{read_only:false; response_revision:476; number_of_response:1; }","duration":"264.989925ms","start":"2024-06-10T11:13:52.439231Z","end":"2024-06-10T11:13:52.704221Z","steps":["trace[413522120] 'process raft request'  (duration: 235.582093ms)","trace[413522120] 'compare'  (duration: 29.228493ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-10T11:14:35.800583Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.497195ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2176522857552705310 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-862380-m03.17d7a056559f99e5\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-862380-m03.17d7a056559f99e5\" value_size:646 lease:2176522857552705035 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-06-10T11:14:35.801111Z","caller":"traceutil/trace.go:171","msg":"trace[172763151] transaction","detail":"{read_only:false; response_revision:601; number_of_response:1; }","duration":"249.521196ms","start":"2024-06-10T11:14:35.551554Z","end":"2024-06-10T11:14:35.801075Z","steps":["trace[172763151] 'process raft request'  (duration: 79.446624ms)","trace[172763151] 'compare'  (duration: 168.303313ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-10T11:14:35.801307Z","caller":"traceutil/trace.go:171","msg":"trace[1401501465] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"186.092785ms","start":"2024-06-10T11:14:35.615201Z","end":"2024-06-10T11:14:35.801294Z","steps":["trace[1401501465] 'process raft request'  (duration: 185.848442ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T11:14:41.581876Z","caller":"traceutil/trace.go:171","msg":"trace[56415643] transaction","detail":"{read_only:false; response_revision:643; number_of_response:1; }","duration":"175.924925ms","start":"2024-06-10T11:14:41.405936Z","end":"2024-06-10T11:14:41.581861Z","steps":["trace[56415643] 'process raft request'  (duration: 175.808057ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T11:17:28.684927Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-06-10T11:17:28.685073Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-862380","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.100:2380"],"advertise-client-urls":["https://192.168.39.100:2379"]}
	{"level":"warn","ts":"2024-06-10T11:17:28.685184Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-10T11:17:28.685269Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-10T11:17:28.76712Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.100:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-06-10T11:17:28.767349Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.100:2379: use of closed network connection"}
	{"level":"info","ts":"2024-06-10T11:17:28.7675Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3276445ff8d31e34","current-leader-member-id":"3276445ff8d31e34"}
	{"level":"info","ts":"2024-06-10T11:17:28.770027Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-06-10T11:17:28.770184Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-06-10T11:17:28.770218Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-862380","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.100:2380"],"advertise-client-urls":["https://192.168.39.100:2379"]}
	
	
	==> kernel <==
	 11:22:53 up 10 min,  0 users,  load average: 0.06, 0.16, 0.11
	Linux multinode-862380 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [5d8e67a6d2840c27a7e9918a80a0a0c785dc7b6d2bd90a358d542bc6a1aabe74] <==
	I0610 11:21:43.420806       1 main.go:250] Node multinode-862380-m02 has CIDR [10.244.1.0/24] 
	I0610 11:21:53.425675       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0610 11:21:53.425716       1 main.go:227] handling current node
	I0610 11:21:53.425726       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0610 11:21:53.425743       1 main.go:250] Node multinode-862380-m02 has CIDR [10.244.1.0/24] 
	I0610 11:22:03.433501       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0610 11:22:03.433675       1 main.go:227] handling current node
	I0610 11:22:03.433756       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0610 11:22:03.433781       1 main.go:250] Node multinode-862380-m02 has CIDR [10.244.1.0/24] 
	I0610 11:22:13.438492       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0610 11:22:13.438544       1 main.go:227] handling current node
	I0610 11:22:13.438562       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0610 11:22:13.438567       1 main.go:250] Node multinode-862380-m02 has CIDR [10.244.1.0/24] 
	I0610 11:22:23.443385       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0610 11:22:23.443582       1 main.go:227] handling current node
	I0610 11:22:23.443670       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0610 11:22:23.443694       1 main.go:250] Node multinode-862380-m02 has CIDR [10.244.1.0/24] 
	I0610 11:22:33.457336       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0610 11:22:33.457376       1 main.go:227] handling current node
	I0610 11:22:33.457391       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0610 11:22:33.457397       1 main.go:250] Node multinode-862380-m02 has CIDR [10.244.1.0/24] 
	I0610 11:22:43.463429       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0610 11:22:43.463570       1 main.go:227] handling current node
	I0610 11:22:43.463636       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0610 11:22:43.463660       1 main.go:250] Node multinode-862380-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [f2791a953a9200b3f61b8829c703b259f1483f87c5e99ce9cfaa18109775e0fc] <==
	I0610 11:16:42.182660       1 main.go:250] Node multinode-862380-m03 has CIDR [10.244.3.0/24] 
	I0610 11:16:52.195382       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0610 11:16:52.195667       1 main.go:227] handling current node
	I0610 11:16:52.195712       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0610 11:16:52.195733       1 main.go:250] Node multinode-862380-m02 has CIDR [10.244.1.0/24] 
	I0610 11:16:52.196527       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0610 11:16:52.196576       1 main.go:250] Node multinode-862380-m03 has CIDR [10.244.3.0/24] 
	I0610 11:17:02.201120       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0610 11:17:02.201157       1 main.go:227] handling current node
	I0610 11:17:02.201170       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0610 11:17:02.201174       1 main.go:250] Node multinode-862380-m02 has CIDR [10.244.1.0/24] 
	I0610 11:17:02.201290       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0610 11:17:02.201310       1 main.go:250] Node multinode-862380-m03 has CIDR [10.244.3.0/24] 
	I0610 11:17:12.206011       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0610 11:17:12.206052       1 main.go:227] handling current node
	I0610 11:17:12.206077       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0610 11:17:12.206082       1 main.go:250] Node multinode-862380-m02 has CIDR [10.244.1.0/24] 
	I0610 11:17:12.206206       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0610 11:17:12.206223       1 main.go:250] Node multinode-862380-m03 has CIDR [10.244.3.0/24] 
	I0610 11:17:22.210300       1 main.go:223] Handling node with IPs: map[192.168.39.100:{}]
	I0610 11:17:22.210339       1 main.go:227] handling current node
	I0610 11:17:22.210349       1 main.go:223] Handling node with IPs: map[192.168.39.47:{}]
	I0610 11:17:22.210354       1 main.go:250] Node multinode-862380-m02 has CIDR [10.244.1.0/24] 
	I0610 11:17:22.210478       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0610 11:17:22.210498       1 main.go:250] Node multinode-862380-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [58557fa016e58b7c0cbd020c0c94ce71b80658955335b632f9b63f06aaec7266] <==
	I0610 11:13:02.123301       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0610 11:13:02.123905       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0610 11:13:02.777087       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 11:13:02.828765       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0610 11:13:02.954007       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0610 11:13:02.965284       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.100]
	I0610 11:13:02.966672       1 controller.go:615] quota admission added evaluator for: endpoints
	I0610 11:13:02.971527       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 11:13:03.186545       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0610 11:13:03.905440       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0610 11:13:03.923077       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0610 11:13:03.947471       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0610 11:13:17.392025       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0610 11:13:17.443188       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0610 11:14:09.028969       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:60212: use of closed network connection
	E0610 11:14:09.201865       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:60226: use of closed network connection
	E0610 11:14:09.398218       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:60240: use of closed network connection
	E0610 11:14:09.568016       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:60260: use of closed network connection
	E0610 11:14:09.734030       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:60272: use of closed network connection
	E0610 11:14:09.901761       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:60288: use of closed network connection
	E0610 11:14:10.167362       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:60324: use of closed network connection
	E0610 11:14:10.328305       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:60356: use of closed network connection
	E0610 11:14:10.486910       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:60376: use of closed network connection
	E0610 11:14:10.646028       1 conn.go:339] Error on socket receive: read tcp 192.168.39.100:8443->192.168.39.1:60394: use of closed network connection
	I0610 11:17:28.677586       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-apiserver [5971ca1a108b34acbf6ae63f70db7b15d696e6cd577d1f3356a2b6661bb028d8] <==
	I0610 11:19:10.942920       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 11:19:10.947489       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0610 11:19:10.950676       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0610 11:19:10.967143       1 shared_informer.go:320] Caches are synced for configmaps
	I0610 11:19:10.971470       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0610 11:19:10.950406       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0610 11:19:10.950485       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0610 11:19:10.950497       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 11:19:10.950506       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0610 11:19:10.974055       1 aggregator.go:165] initial CRD sync complete...
	I0610 11:19:10.974063       1 autoregister_controller.go:141] Starting autoregister controller
	I0610 11:19:10.974067       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0610 11:19:10.974072       1 cache.go:39] Caches are synced for autoregister controller
	E0610 11:19:10.983113       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0610 11:19:11.013538       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0610 11:19:11.013675       1 policy_source.go:224] refreshing policies
	I0610 11:19:11.056087       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 11:19:11.871471       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0610 11:19:13.259778       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0610 11:19:13.392805       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0610 11:19:13.409232       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0610 11:19:13.477285       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 11:19:13.496061       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0610 11:19:23.584430       1 controller.go:615] quota admission added evaluator for: endpoints
	I0610 11:19:23.656893       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [43dceb31898cc620cff1b69f4b915cc293db2955ad4fdfa09aaf24f4ba57bde1] <==
	I0610 11:19:49.162087       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-862380-m02" podCIDRs=["10.244.1.0/24"]
	I0610 11:19:49.836516       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.849µs"
	I0610 11:19:51.028937       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.15µs"
	I0610 11:19:51.039967       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.412µs"
	I0610 11:19:51.050531       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.741µs"
	I0610 11:19:51.088299       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.973µs"
	I0610 11:19:51.100367       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.58µs"
	I0610 11:19:51.102471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.533µs"
	I0610 11:19:57.905355       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-862380-m02"
	I0610 11:19:57.926313       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.938µs"
	I0610 11:19:57.957968       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="86.28µs"
	I0610 11:20:01.519676       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.419629ms"
	I0610 11:20:01.520001       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="123.31µs"
	I0610 11:20:15.948796       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-862380-m02"
	I0610 11:20:17.066171       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-862380-m02"
	I0610 11:20:17.066546       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-862380-m03\" does not exist"
	I0610 11:20:17.081113       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-862380-m03" podCIDRs=["10.244.2.0/24"]
	I0610 11:20:26.281562       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-862380-m02"
	I0610 11:20:31.390975       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-862380-m02"
	I0610 11:21:13.565224       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.713307ms"
	I0610 11:21:13.565956       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.077µs"
	I0610 11:21:23.518684       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-mqzsw"
	I0610 11:21:23.540454       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-mqzsw"
	I0610 11:21:23.540493       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-7gbwh"
	I0610 11:21:23.563951       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-7gbwh"
	
	
	==> kube-controller-manager [4f84f021658bb7edbb72828c3cdce1348895737f86d83744cb73982fa6cdc4cb] <==
	I0610 11:13:52.708660       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-862380-m02\" does not exist"
	I0610 11:13:52.722288       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-862380-m02" podCIDRs=["10.244.1.0/24"]
	I0610 11:13:56.599440       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-862380-m02"
	I0610 11:14:02.696231       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-862380-m02"
	I0610 11:14:04.812013       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.073902ms"
	I0610 11:14:04.844703       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.653908ms"
	I0610 11:14:04.856741       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.989681ms"
	I0610 11:14:04.856833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.253µs"
	I0610 11:14:08.135921       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.773641ms"
	I0610 11:14:08.136390       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="108.171µs"
	I0610 11:14:08.608550       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.698712ms"
	I0610 11:14:08.608717       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.332µs"
	I0610 11:14:35.805259       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-862380-m02"
	I0610 11:14:35.813072       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-862380-m03\" does not exist"
	I0610 11:14:35.844034       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-862380-m03" podCIDRs=["10.244.2.0/24"]
	I0610 11:14:36.619041       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-862380-m03"
	I0610 11:14:45.309474       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-862380-m02"
	I0610 11:15:13.364901       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-862380-m02"
	I0610 11:15:14.709349       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-862380-m03\" does not exist"
	I0610 11:15:14.710042       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-862380-m02"
	I0610 11:15:14.728756       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-862380-m03" podCIDRs=["10.244.3.0/24"]
	I0610 11:15:23.264974       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-862380-m02"
	I0610 11:16:06.669704       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-862380-m03"
	I0610 11:16:06.711832       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.68571ms"
	I0610 11:16:06.711963       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="90.605µs"
	
	
	==> kube-proxy [55fbec1ed1f5c35125219a44fd079a722d49d9d8cbdb2455f8a70f01da71ed4e] <==
	I0610 11:19:12.780213       1 server_linux.go:69] "Using iptables proxy"
	I0610 11:19:12.859163       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.100"]
	I0610 11:19:12.979822       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 11:19:12.979893       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 11:19:12.979909       1 server_linux.go:165] "Using iptables Proxier"
	I0610 11:19:12.984330       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 11:19:12.984557       1 server.go:872] "Version info" version="v1.30.1"
	I0610 11:19:12.984646       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 11:19:12.985979       1 config.go:192] "Starting service config controller"
	I0610 11:19:12.986048       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 11:19:12.986092       1 config.go:101] "Starting endpoint slice config controller"
	I0610 11:19:12.986109       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 11:19:12.986671       1 config.go:319] "Starting node config controller"
	I0610 11:19:12.986710       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 11:19:13.086408       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 11:19:13.086461       1 shared_informer.go:320] Caches are synced for service config
	I0610 11:19:13.088348       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [d7dcbfcd0f6f950677096624f71b7ec58dbe647a45bfe1896dd52dd14753a55c] <==
	I0610 11:13:18.447696       1 server_linux.go:69] "Using iptables proxy"
	I0610 11:13:18.456560       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.100"]
	I0610 11:13:18.518217       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 11:13:18.518283       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 11:13:18.518306       1 server_linux.go:165] "Using iptables Proxier"
	I0610 11:13:18.523051       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 11:13:18.526662       1 server.go:872] "Version info" version="v1.30.1"
	I0610 11:13:18.528913       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 11:13:18.535521       1 config.go:192] "Starting service config controller"
	I0610 11:13:18.535558       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 11:13:18.535590       1 config.go:101] "Starting endpoint slice config controller"
	I0610 11:13:18.535623       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 11:13:18.537791       1 config.go:319] "Starting node config controller"
	I0610 11:13:18.537838       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 11:13:18.635911       1 shared_informer.go:320] Caches are synced for service config
	I0610 11:13:18.635937       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 11:13:18.637971       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [50310344784fc7f085c0a0d226fde85f9b838c4bcfeaafbde1cf90adf4432aee] <==
	I0610 11:19:08.529645       1 serving.go:380] Generated self-signed cert in-memory
	W0610 11:19:10.875009       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0610 11:19:10.875148       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 11:19:10.875211       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0610 11:19:10.875258       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 11:19:10.942932       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0610 11:19:10.943092       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 11:19:10.949867       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0610 11:19:10.950174       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 11:19:10.950246       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 11:19:10.950283       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 11:19:11.050461       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [9c465791f6493e7b755a5672c14ce27cf99149ae704df0b5b7ba7589cbdccd3f] <==
	E0610 11:13:02.071870       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0610 11:13:02.112485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 11:13:02.112516       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0610 11:13:02.116397       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0610 11:13:02.116437       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0610 11:13:02.116860       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 11:13:02.116897       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0610 11:13:02.127966       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0610 11:13:02.128007       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0610 11:13:02.162315       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 11:13:02.162358       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 11:13:02.179534       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 11:13:02.179669       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0610 11:13:02.259443       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 11:13:02.259558       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 11:13:02.427099       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 11:13:02.427177       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 11:13:02.432354       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 11:13:02.432396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0610 11:13:02.444411       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 11:13:02.444450       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0610 11:13:02.549958       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0610 11:13:02.550000       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0610 11:13:05.027573       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0610 11:17:28.690834       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jun 10 11:19:11 multinode-862380 kubelet[3078]: I0610 11:19:11.727408    3078 topology_manager.go:215] "Topology Admit Handler" podUID="7966a309-dca2-488e-b683-0ff37fa01fe3" podNamespace="kube-system" podName="storage-provisioner"
	Jun 10 11:19:11 multinode-862380 kubelet[3078]: I0610 11:19:11.727516    3078 topology_manager.go:215] "Topology Admit Handler" podUID="237e1205-8c4b-4234-ad0f-80e35f097827" podNamespace="default" podName="busybox-fc5497c4f-jx8f9"
	Jun 10 11:19:11 multinode-862380 kubelet[3078]: I0610 11:19:11.745147    3078 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 10 11:19:11 multinode-862380 kubelet[3078]: I0610 11:19:11.833983    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d6d1e96-ea64-4ea0-855a-0e8fedb5164d-lib-modules\") pod \"kindnet-bnpjz\" (UID: \"6d6d1e96-ea64-4ea0-855a-0e8fedb5164d\") " pod="kube-system/kindnet-bnpjz"
	Jun 10 11:19:11 multinode-862380 kubelet[3078]: I0610 11:19:11.834100    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7966a309-dca2-488e-b683-0ff37fa01fe3-tmp\") pod \"storage-provisioner\" (UID: \"7966a309-dca2-488e-b683-0ff37fa01fe3\") " pod="kube-system/storage-provisioner"
	Jun 10 11:19:11 multinode-862380 kubelet[3078]: I0610 11:19:11.834187    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d6d1e96-ea64-4ea0-855a-0e8fedb5164d-xtables-lock\") pod \"kindnet-bnpjz\" (UID: \"6d6d1e96-ea64-4ea0-855a-0e8fedb5164d\") " pod="kube-system/kindnet-bnpjz"
	Jun 10 11:19:11 multinode-862380 kubelet[3078]: I0610 11:19:11.835343    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6793da8-f52b-488b-a0ec-88cbf6460c13-lib-modules\") pod \"kube-proxy-gghfj\" (UID: \"d6793da8-f52b-488b-a0ec-88cbf6460c13\") " pod="kube-system/kube-proxy-gghfj"
	Jun 10 11:19:11 multinode-862380 kubelet[3078]: I0610 11:19:11.835474    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6d6d1e96-ea64-4ea0-855a-0e8fedb5164d-cni-cfg\") pod \"kindnet-bnpjz\" (UID: \"6d6d1e96-ea64-4ea0-855a-0e8fedb5164d\") " pod="kube-system/kindnet-bnpjz"
	Jun 10 11:19:11 multinode-862380 kubelet[3078]: I0610 11:19:11.835726    3078 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6793da8-f52b-488b-a0ec-88cbf6460c13-xtables-lock\") pod \"kube-proxy-gghfj\" (UID: \"d6793da8-f52b-488b-a0ec-88cbf6460c13\") " pod="kube-system/kube-proxy-gghfj"
	Jun 10 11:19:20 multinode-862380 kubelet[3078]: I0610 11:19:20.131198    3078 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jun 10 11:20:06 multinode-862380 kubelet[3078]: E0610 11:20:06.771336    3078 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 11:20:06 multinode-862380 kubelet[3078]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 11:20:06 multinode-862380 kubelet[3078]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 11:20:06 multinode-862380 kubelet[3078]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 11:20:06 multinode-862380 kubelet[3078]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 11:21:06 multinode-862380 kubelet[3078]: E0610 11:21:06.771216    3078 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 11:21:06 multinode-862380 kubelet[3078]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 11:21:06 multinode-862380 kubelet[3078]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 11:21:06 multinode-862380 kubelet[3078]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 11:21:06 multinode-862380 kubelet[3078]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 11:22:06 multinode-862380 kubelet[3078]: E0610 11:22:06.775426    3078 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 11:22:06 multinode-862380 kubelet[3078]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 11:22:06 multinode-862380 kubelet[3078]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 11:22:06 multinode-862380 kubelet[3078]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 11:22:06 multinode-862380 kubelet[3078]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 11:22:52.309658   42975 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19046-3880/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
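The repeated "Could not set up iptables canary" kubelet errors in the log above come from ip6tables having no nat table inside the guest. A minimal check from inside the node, sketched here on the assumption that the Buildroot kernel may simply not ship the ip6table_nat module (in which case the message is noise rather than the cause of this failure), would be:

	# open a shell on the affected node (profile name taken from this run)
	out/minikube-linux-amd64 ssh -p multinode-862380
	# inside the VM: is the IPv6 nat module present or loadable?
	lsmod | grep ip6table_nat
	sudo modprobe ip6table_nat
	sudo ip6tables -t nat -L -n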
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-862380 -n multinode-862380
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-862380 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.39s)
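The "failed to output last start logs: ... bufio.Scanner: token too long" line in the stderr block above is the standard Go bufio.Scanner error for a single line longer than the scanner's default 64 KiB token limit. A minimal sketch of that failure mode and the usual remedy, Scanner.Buffer, follows; the file path is a placeholder for this sketch, not minikube's actual logs code:

    package main

    import (
        "bufio"
        "fmt"
        "os"
    )

    func main() {
        f, err := os.Open("lastStart.txt") // placeholder path for this sketch
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        // The default limit (bufio.MaxScanTokenSize) is 64 KiB per line;
        // longer lines make sc.Err() return "bufio.Scanner: token too long".
        // Raising the cap, e.g. to 1 MiB, avoids that:
        sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)

        for sc.Scan() {
            _ = sc.Text() // process one line of the log
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, "scan error:", err)
        }
    }
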

                                                
                                    
x
+
TestPreload (272.41s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-628230 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0610 11:26:57.913314   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-628230 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m9.772812148s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-628230 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-628230 image pull gcr.io/k8s-minikube/busybox: (2.785120257s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-628230
E0610 11:29:12.453709   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-628230: exit status 82 (2m0.468186203s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-628230"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-628230 failed: exit status 82
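Exit status 82 (GUEST_STOP_TIMEOUT) above means the VM never left the "Running" state before the stop command gave up, roughly two minutes after it started. The general shape of that failure is a poll-until-stopped loop with a deadline; the sketch below uses a hypothetical vmState() stand-in rather than minikube's or libmachine's real API, and short durations so it finishes quickly:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // vmState is a hypothetical stand-in for querying the hypervisor;
    // here it always reports "Running", like the stuck guest in this test.
    func vmState() string { return "Running" }

    // waitForStop polls until the VM reports "Stopped" or the deadline passes.
    func waitForStop(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if vmState() == "Stopped" {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return errors.New(`unable to stop vm, current state "Running"`)
    }

    func main() {
        // The real stop above waited on the order of two minutes; a short
        // timeout keeps this sketch quick while showing the same outcome.
        if err := waitForStop(6 * time.Second); err != nil {
            fmt.Println("GUEST_STOP_TIMEOUT:", err)
        }
    }
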
panic.go:626: *** TestPreload FAILED at 2024-06-10 11:30:48.0831259 +0000 UTC m=+4201.169157402
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-628230 -n test-preload-628230
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-628230 -n test-preload-628230: exit status 3 (18.48241677s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 11:31:06.561296   45842 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.33:22: connect: no route to host
	E0610 11:31:06.561316   45842 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.33:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-628230" host is not running, skipping log retrieval (state="Error")
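The "dial tcp 192.168.39.33:22: connect: no route to host" errors above mean the status check could not even open a TCP connection to the guest's SSH port after the failed stop, which is why the host state is reported as "Error". A small connectivity probe in Go makes the same distinction; the address is the one from the log and nothing here is minikube-specific:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Address taken from the log above; adjust for your own guest.
        addr := "192.168.39.33:22"
        conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
        if err != nil {
            // "connect: no route to host" shows up here when the guest's
            // network is gone, e.g. the VM is mid-shutdown or unreachable.
            fmt.Println("ssh port unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("ssh port reachable")
    }
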
helpers_test.go:175: Cleaning up "test-preload-628230" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-628230
--- FAIL: TestPreload (272.41s)

                                                
                                    
x
+
TestKubernetesUpgrade (441.88s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-685160 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0610 11:36:57.913709   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-685160 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m53.62607233s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-685160] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19046
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-685160" primary control-plane node in "kubernetes-upgrade-685160" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 11:36:45.990131   52801 out.go:291] Setting OutFile to fd 1 ...
	I0610 11:36:45.990288   52801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:36:45.990302   52801 out.go:304] Setting ErrFile to fd 2...
	I0610 11:36:45.990310   52801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:36:45.990892   52801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 11:36:45.991850   52801 out.go:298] Setting JSON to false
	I0610 11:36:45.993235   52801 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4747,"bootTime":1718014659,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 11:36:45.993310   52801 start.go:139] virtualization: kvm guest
	I0610 11:36:45.995525   52801 out.go:177] * [kubernetes-upgrade-685160] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 11:36:45.996883   52801 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 11:36:45.996902   52801 notify.go:220] Checking for updates...
	I0610 11:36:45.998728   52801 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 11:36:46.000110   52801 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 11:36:46.001344   52801 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 11:36:46.002560   52801 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 11:36:46.004005   52801 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 11:36:46.005855   52801 config.go:182] Loaded profile config "cert-expiration-324836": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:36:46.005978   52801 config.go:182] Loaded profile config "pause-761253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:36:46.006102   52801 config.go:182] Loaded profile config "running-upgrade-130010": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0610 11:36:46.006214   52801 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 11:36:46.047347   52801 out.go:177] * Using the kvm2 driver based on user configuration
	I0610 11:36:46.048773   52801 start.go:297] selected driver: kvm2
	I0610 11:36:46.048798   52801 start.go:901] validating driver "kvm2" against <nil>
	I0610 11:36:46.048827   52801 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 11:36:46.049774   52801 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 11:36:46.049905   52801 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19046-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0610 11:36:46.068996   52801 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0610 11:36:46.069056   52801 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 11:36:46.069359   52801 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 11:36:46.069432   52801 cni.go:84] Creating CNI manager for ""
	I0610 11:36:46.069459   52801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:36:46.069470   52801 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 11:36:46.069551   52801 start.go:340] cluster config:
	{Name:kubernetes-upgrade-685160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-685160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:36:46.069688   52801 iso.go:125] acquiring lock: {Name:mk85871dbcb370cef71a4aa2ff7ef43503908c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 11:36:46.071707   52801 out.go:177] * Starting "kubernetes-upgrade-685160" primary control-plane node in "kubernetes-upgrade-685160" cluster
	I0610 11:36:46.073067   52801 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0610 11:36:46.073118   52801 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0610 11:36:46.073132   52801 cache.go:56] Caching tarball of preloaded images
	I0610 11:36:46.073220   52801 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 11:36:46.073235   52801 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0610 11:36:46.073359   52801 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/config.json ...
	I0610 11:36:46.073386   52801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/config.json: {Name:mkbf64f39a00f22cf70e18ea4183084a69152e05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:36:46.073564   52801 start.go:360] acquireMachinesLock for kubernetes-upgrade-685160: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 11:37:09.565825   52801 start.go:364] duration metric: took 23.492211752s to acquireMachinesLock for "kubernetes-upgrade-685160"
	I0610 11:37:09.565907   52801 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-685160 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-685160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 11:37:09.566023   52801 start.go:125] createHost starting for "" (driver="kvm2")
	I0610 11:37:09.568367   52801 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 11:37:09.568533   52801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:37:09.568566   52801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:37:09.584390   52801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46447
	I0610 11:37:09.584820   52801 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:37:09.585436   52801 main.go:141] libmachine: Using API Version  1
	I0610 11:37:09.585462   52801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:37:09.585806   52801 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:37:09.586004   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetMachineName
	I0610 11:37:09.586154   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .DriverName
	I0610 11:37:09.586321   52801 start.go:159] libmachine.API.Create for "kubernetes-upgrade-685160" (driver="kvm2")
	I0610 11:37:09.586361   52801 client.go:168] LocalClient.Create starting
	I0610 11:37:09.586398   52801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem
	I0610 11:37:09.586433   52801 main.go:141] libmachine: Decoding PEM data...
	I0610 11:37:09.586451   52801 main.go:141] libmachine: Parsing certificate...
	I0610 11:37:09.586516   52801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem
	I0610 11:37:09.586539   52801 main.go:141] libmachine: Decoding PEM data...
	I0610 11:37:09.586555   52801 main.go:141] libmachine: Parsing certificate...
	I0610 11:37:09.586581   52801 main.go:141] libmachine: Running pre-create checks...
	I0610 11:37:09.586594   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .PreCreateCheck
	I0610 11:37:09.586954   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetConfigRaw
	I0610 11:37:09.587426   52801 main.go:141] libmachine: Creating machine...
	I0610 11:37:09.587444   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .Create
	I0610 11:37:09.587560   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Creating KVM machine...
	I0610 11:37:09.588880   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | found existing default KVM network
	I0610 11:37:09.589874   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | I0610 11:37:09.589709   53065 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:4c:d0:b9} reservation:<nil>}
	I0610 11:37:09.590760   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | I0610 11:37:09.590685   53065 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010fce0}
	I0610 11:37:09.590783   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | created network xml: 
	I0610 11:37:09.590799   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | <network>
	I0610 11:37:09.590812   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG |   <name>mk-kubernetes-upgrade-685160</name>
	I0610 11:37:09.590829   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG |   <dns enable='no'/>
	I0610 11:37:09.590839   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG |   
	I0610 11:37:09.590859   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0610 11:37:09.590871   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG |     <dhcp>
	I0610 11:37:09.590893   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0610 11:37:09.590912   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG |     </dhcp>
	I0610 11:37:09.590922   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG |   </ip>
	I0610 11:37:09.590930   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG |   
	I0610 11:37:09.590952   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | </network>
	I0610 11:37:09.590963   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | 
	I0610 11:37:09.596809   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | trying to create private KVM network mk-kubernetes-upgrade-685160 192.168.50.0/24...
	I0610 11:37:09.669250   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | private KVM network mk-kubernetes-upgrade-685160 192.168.50.0/24 created
	I0610 11:37:09.669288   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Setting up store path in /home/jenkins/minikube-integration/19046-3880/.minikube/machines/kubernetes-upgrade-685160 ...
	I0610 11:37:09.669304   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | I0610 11:37:09.669218   53065 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 11:37:09.669321   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Building disk image from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0610 11:37:09.669394   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Downloading /home/jenkins/minikube-integration/19046-3880/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 11:37:09.909797   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | I0610 11:37:09.909599   53065 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/kubernetes-upgrade-685160/id_rsa...
	I0610 11:37:10.032681   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | I0610 11:37:10.032486   53065 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/kubernetes-upgrade-685160/kubernetes-upgrade-685160.rawdisk...
	I0610 11:37:10.032724   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | Writing magic tar header
	I0610 11:37:10.032743   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | Writing SSH key tar header
	I0610 11:37:10.032759   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | I0610 11:37:10.032617   53065 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/kubernetes-upgrade-685160 ...
	I0610 11:37:10.032775   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/kubernetes-upgrade-685160 (perms=drwx------)
	I0610 11:37:10.032792   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/kubernetes-upgrade-685160
	I0610 11:37:10.032812   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines
	I0610 11:37:10.032826   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 11:37:10.032848   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880
	I0610 11:37:10.032865   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0610 11:37:10.032896   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines (perms=drwxr-xr-x)
	I0610 11:37:10.032911   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | Checking permissions on dir: /home/jenkins
	I0610 11:37:10.032924   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | Checking permissions on dir: /home
	I0610 11:37:10.032934   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | Skipping /home - not owner
	I0610 11:37:10.032976   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube (perms=drwxr-xr-x)
	I0610 11:37:10.032997   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880 (perms=drwxrwxr-x)
	I0610 11:37:10.033013   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0610 11:37:10.033027   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0610 11:37:10.033038   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Creating domain...
	I0610 11:37:10.034781   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) define libvirt domain using xml: 
	I0610 11:37:10.034800   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) <domain type='kvm'>
	I0610 11:37:10.034808   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)   <name>kubernetes-upgrade-685160</name>
	I0610 11:37:10.034813   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)   <memory unit='MiB'>2200</memory>
	I0610 11:37:10.034821   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)   <vcpu>2</vcpu>
	I0610 11:37:10.034835   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)   <features>
	I0610 11:37:10.034843   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)     <acpi/>
	I0610 11:37:10.034851   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)     <apic/>
	I0610 11:37:10.034863   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)     <pae/>
	I0610 11:37:10.034869   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)     
	I0610 11:37:10.034877   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)   </features>
	I0610 11:37:10.034884   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)   <cpu mode='host-passthrough'>
	I0610 11:37:10.034895   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)   
	I0610 11:37:10.034902   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)   </cpu>
	I0610 11:37:10.034913   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)   <os>
	I0610 11:37:10.034919   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)     <type>hvm</type>
	I0610 11:37:10.034948   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)     <boot dev='cdrom'/>
	I0610 11:37:10.034971   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)     <boot dev='hd'/>
	I0610 11:37:10.034979   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)     <bootmenu enable='no'/>
	I0610 11:37:10.034985   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)   </os>
	I0610 11:37:10.034997   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)   <devices>
	I0610 11:37:10.035007   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)     <disk type='file' device='cdrom'>
	I0610 11:37:10.035022   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/kubernetes-upgrade-685160/boot2docker.iso'/>
	I0610 11:37:10.035035   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)       <target dev='hdc' bus='scsi'/>
	I0610 11:37:10.035044   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)       <readonly/>
	I0610 11:37:10.035054   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)     </disk>
	I0610 11:37:10.035069   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)     <disk type='file' device='disk'>
	I0610 11:37:10.035079   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0610 11:37:10.035090   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/kubernetes-upgrade-685160/kubernetes-upgrade-685160.rawdisk'/>
	I0610 11:37:10.035098   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)       <target dev='hda' bus='virtio'/>
	I0610 11:37:10.035107   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)     </disk>
	I0610 11:37:10.035118   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)     <interface type='network'>
	I0610 11:37:10.035133   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)       <source network='mk-kubernetes-upgrade-685160'/>
	I0610 11:37:10.035145   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)       <model type='virtio'/>
	I0610 11:37:10.035157   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)     </interface>
	I0610 11:37:10.035164   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)     <interface type='network'>
	I0610 11:37:10.035176   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)       <source network='default'/>
	I0610 11:37:10.035187   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)       <model type='virtio'/>
	I0610 11:37:10.035196   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)     </interface>
	I0610 11:37:10.035207   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)     <serial type='pty'>
	I0610 11:37:10.035223   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)       <target port='0'/>
	I0610 11:37:10.035240   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)     </serial>
	I0610 11:37:10.035253   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)     <console type='pty'>
	I0610 11:37:10.035265   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)       <target type='serial' port='0'/>
	I0610 11:37:10.035277   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)     </console>
	I0610 11:37:10.035285   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)     <rng model='virtio'>
	I0610 11:37:10.035298   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)       <backend model='random'>/dev/random</backend>
	I0610 11:37:10.035309   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)     </rng>
	I0610 11:37:10.035319   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)     
	I0610 11:37:10.035331   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)     
	I0610 11:37:10.035339   52801 main.go:141] libmachine: (kubernetes-upgrade-685160)   </devices>
	I0610 11:37:10.035348   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) </domain>
	I0610 11:37:10.035358   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) 
	I0610 11:37:10.039924   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:16:bf:5d in network default
	I0610 11:37:10.040714   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Ensuring networks are active...
	I0610 11:37:10.040741   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:10.041598   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Ensuring network default is active
	I0610 11:37:10.041984   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Ensuring network mk-kubernetes-upgrade-685160 is active
	I0610 11:37:10.042627   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Getting domain xml...
	I0610 11:37:10.043521   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Creating domain...
	I0610 11:37:11.339428   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Waiting to get IP...
	I0610 11:37:11.340269   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:11.340728   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | unable to find current IP address of domain kubernetes-upgrade-685160 in network mk-kubernetes-upgrade-685160
	I0610 11:37:11.340806   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | I0610 11:37:11.340692   53065 retry.go:31] will retry after 269.696928ms: waiting for machine to come up
	I0610 11:37:11.612410   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:11.613012   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | unable to find current IP address of domain kubernetes-upgrade-685160 in network mk-kubernetes-upgrade-685160
	I0610 11:37:11.613043   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | I0610 11:37:11.612971   53065 retry.go:31] will retry after 382.126462ms: waiting for machine to come up
	I0610 11:37:11.996718   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:11.997357   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | unable to find current IP address of domain kubernetes-upgrade-685160 in network mk-kubernetes-upgrade-685160
	I0610 11:37:11.997385   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | I0610 11:37:11.997315   53065 retry.go:31] will retry after 398.080972ms: waiting for machine to come up
	I0610 11:37:12.396993   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:12.397612   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | unable to find current IP address of domain kubernetes-upgrade-685160 in network mk-kubernetes-upgrade-685160
	I0610 11:37:12.397641   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | I0610 11:37:12.397574   53065 retry.go:31] will retry after 590.049821ms: waiting for machine to come up
	I0610 11:37:12.989451   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:12.990042   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | unable to find current IP address of domain kubernetes-upgrade-685160 in network mk-kubernetes-upgrade-685160
	I0610 11:37:12.990069   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | I0610 11:37:12.989983   53065 retry.go:31] will retry after 715.242926ms: waiting for machine to come up
	I0610 11:37:13.706936   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:13.707399   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | unable to find current IP address of domain kubernetes-upgrade-685160 in network mk-kubernetes-upgrade-685160
	I0610 11:37:13.707436   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | I0610 11:37:13.707331   53065 retry.go:31] will retry after 746.615967ms: waiting for machine to come up
	I0610 11:37:14.455350   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:14.455913   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | unable to find current IP address of domain kubernetes-upgrade-685160 in network mk-kubernetes-upgrade-685160
	I0610 11:37:14.455939   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | I0610 11:37:14.455849   53065 retry.go:31] will retry after 1.088024885s: waiting for machine to come up
	I0610 11:37:15.545455   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:15.545931   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | unable to find current IP address of domain kubernetes-upgrade-685160 in network mk-kubernetes-upgrade-685160
	I0610 11:37:15.545999   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | I0610 11:37:15.545914   53065 retry.go:31] will retry after 1.133073417s: waiting for machine to come up
	I0610 11:37:16.680191   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:16.680619   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | unable to find current IP address of domain kubernetes-upgrade-685160 in network mk-kubernetes-upgrade-685160
	I0610 11:37:16.680651   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | I0610 11:37:16.680569   53065 retry.go:31] will retry after 1.231126743s: waiting for machine to come up
	I0610 11:37:17.913027   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:17.913555   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | unable to find current IP address of domain kubernetes-upgrade-685160 in network mk-kubernetes-upgrade-685160
	I0610 11:37:17.913576   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | I0610 11:37:17.913517   53065 retry.go:31] will retry after 1.977401433s: waiting for machine to come up
	I0610 11:37:19.893147   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:19.893671   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | unable to find current IP address of domain kubernetes-upgrade-685160 in network mk-kubernetes-upgrade-685160
	I0610 11:37:19.893699   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | I0610 11:37:19.893634   53065 retry.go:31] will retry after 1.753702543s: waiting for machine to come up
	I0610 11:37:21.649645   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:21.650058   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | unable to find current IP address of domain kubernetes-upgrade-685160 in network mk-kubernetes-upgrade-685160
	I0610 11:37:21.650087   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | I0610 11:37:21.649994   53065 retry.go:31] will retry after 2.775909658s: waiting for machine to come up
	I0610 11:37:24.428021   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:24.428352   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | unable to find current IP address of domain kubernetes-upgrade-685160 in network mk-kubernetes-upgrade-685160
	I0610 11:37:24.428377   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | I0610 11:37:24.428299   53065 retry.go:31] will retry after 4.24161356s: waiting for machine to come up
	I0610 11:37:28.673949   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:28.674540   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | unable to find current IP address of domain kubernetes-upgrade-685160 in network mk-kubernetes-upgrade-685160
	I0610 11:37:28.674564   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | I0610 11:37:28.674476   53065 retry.go:31] will retry after 4.255498179s: waiting for machine to come up
	I0610 11:37:32.932437   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:32.932832   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Found IP for machine: 192.168.50.47
	I0610 11:37:32.932848   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Reserving static IP address...
	I0610 11:37:32.932890   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has current primary IP address 192.168.50.47 and MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:32.933301   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-685160", mac: "52:54:00:9b:51:fd", ip: "192.168.50.47"} in network mk-kubernetes-upgrade-685160
	I0610 11:37:33.012889   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | Getting to WaitForSSH function...
	I0610 11:37:33.012934   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Reserved static IP address: 192.168.50.47
	I0610 11:37:33.012967   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Waiting for SSH to be available...
	I0610 11:37:33.015705   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:33.016038   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:51:fd", ip: ""} in network mk-kubernetes-upgrade-685160: {Iface:virbr1 ExpiryTime:2024-06-10 12:37:23 +0000 UTC Type:0 Mac:52:54:00:9b:51:fd Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:minikube Clientid:01:52:54:00:9b:51:fd}
	I0610 11:37:33.016068   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined IP address 192.168.50.47 and MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:33.016222   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | Using SSH client type: external
	I0610 11:37:33.016244   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | Using SSH private key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/kubernetes-upgrade-685160/id_rsa (-rw-------)
	I0610 11:37:33.016273   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.47 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19046-3880/.minikube/machines/kubernetes-upgrade-685160/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0610 11:37:33.016287   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | About to run SSH command:
	I0610 11:37:33.016304   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | exit 0
	I0610 11:37:33.141167   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | SSH cmd err, output: <nil>: 
	I0610 11:37:33.141498   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) KVM machine creation complete!
	I0610 11:37:33.141834   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetConfigRaw
	I0610 11:37:33.142455   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .DriverName
	I0610 11:37:33.142674   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .DriverName
	I0610 11:37:33.142882   52801 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0610 11:37:33.142897   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetState
	I0610 11:37:33.144379   52801 main.go:141] libmachine: Detecting operating system of created instance...
	I0610 11:37:33.144393   52801 main.go:141] libmachine: Waiting for SSH to be available...
	I0610 11:37:33.144398   52801 main.go:141] libmachine: Getting to WaitForSSH function...
	I0610 11:37:33.144404   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHHostname
	I0610 11:37:33.146980   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:33.147371   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:51:fd", ip: ""} in network mk-kubernetes-upgrade-685160: {Iface:virbr1 ExpiryTime:2024-06-10 12:37:23 +0000 UTC Type:0 Mac:52:54:00:9b:51:fd Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-685160 Clientid:01:52:54:00:9b:51:fd}
	I0610 11:37:33.147391   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined IP address 192.168.50.47 and MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:33.147547   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHPort
	I0610 11:37:33.147717   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHKeyPath
	I0610 11:37:33.147899   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHKeyPath
	I0610 11:37:33.148023   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHUsername
	I0610 11:37:33.148209   52801 main.go:141] libmachine: Using SSH client type: native
	I0610 11:37:33.148427   52801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0610 11:37:33.148442   52801 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0610 11:37:33.248259   52801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 11:37:33.248286   52801 main.go:141] libmachine: Detecting the provisioner...
	I0610 11:37:33.248300   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHHostname
	I0610 11:37:33.251262   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:33.251691   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:51:fd", ip: ""} in network mk-kubernetes-upgrade-685160: {Iface:virbr1 ExpiryTime:2024-06-10 12:37:23 +0000 UTC Type:0 Mac:52:54:00:9b:51:fd Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-685160 Clientid:01:52:54:00:9b:51:fd}
	I0610 11:37:33.251728   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined IP address 192.168.50.47 and MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:33.251845   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHPort
	I0610 11:37:33.252078   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHKeyPath
	I0610 11:37:33.252290   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHKeyPath
	I0610 11:37:33.252470   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHUsername
	I0610 11:37:33.252641   52801 main.go:141] libmachine: Using SSH client type: native
	I0610 11:37:33.252859   52801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0610 11:37:33.252875   52801 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0610 11:37:33.353426   52801 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0610 11:37:33.353486   52801 main.go:141] libmachine: found compatible host: buildroot
	I0610 11:37:33.353496   52801 main.go:141] libmachine: Provisioning with buildroot...
	I0610 11:37:33.353507   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetMachineName
	I0610 11:37:33.353783   52801 buildroot.go:166] provisioning hostname "kubernetes-upgrade-685160"
	I0610 11:37:33.353806   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetMachineName
	I0610 11:37:33.354021   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHHostname
	I0610 11:37:33.356544   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:33.356931   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:51:fd", ip: ""} in network mk-kubernetes-upgrade-685160: {Iface:virbr1 ExpiryTime:2024-06-10 12:37:23 +0000 UTC Type:0 Mac:52:54:00:9b:51:fd Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-685160 Clientid:01:52:54:00:9b:51:fd}
	I0610 11:37:33.356987   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined IP address 192.168.50.47 and MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:33.357077   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHPort
	I0610 11:37:33.357275   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHKeyPath
	I0610 11:37:33.357433   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHKeyPath
	I0610 11:37:33.357556   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHUsername
	I0610 11:37:33.357749   52801 main.go:141] libmachine: Using SSH client type: native
	I0610 11:37:33.357895   52801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0610 11:37:33.357907   52801 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-685160 && echo "kubernetes-upgrade-685160" | sudo tee /etc/hostname
	I0610 11:37:33.470553   52801 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-685160
	
	I0610 11:37:33.470586   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHHostname
	I0610 11:37:33.473280   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:33.473642   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:51:fd", ip: ""} in network mk-kubernetes-upgrade-685160: {Iface:virbr1 ExpiryTime:2024-06-10 12:37:23 +0000 UTC Type:0 Mac:52:54:00:9b:51:fd Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-685160 Clientid:01:52:54:00:9b:51:fd}
	I0610 11:37:33.473673   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined IP address 192.168.50.47 and MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:33.473857   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHPort
	I0610 11:37:33.474095   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHKeyPath
	I0610 11:37:33.474279   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHKeyPath
	I0610 11:37:33.474446   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHUsername
	I0610 11:37:33.474622   52801 main.go:141] libmachine: Using SSH client type: native
	I0610 11:37:33.474818   52801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0610 11:37:33.474838   52801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-685160' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-685160/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-685160' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 11:37:33.590434   52801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 11:37:33.590463   52801 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19046-3880/.minikube CaCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19046-3880/.minikube}
	I0610 11:37:33.590493   52801 buildroot.go:174] setting up certificates
	I0610 11:37:33.590506   52801 provision.go:84] configureAuth start
	I0610 11:37:33.590522   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetMachineName
	I0610 11:37:33.590770   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetIP
	I0610 11:37:33.593993   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:33.594426   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:51:fd", ip: ""} in network mk-kubernetes-upgrade-685160: {Iface:virbr1 ExpiryTime:2024-06-10 12:37:23 +0000 UTC Type:0 Mac:52:54:00:9b:51:fd Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-685160 Clientid:01:52:54:00:9b:51:fd}
	I0610 11:37:33.594465   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined IP address 192.168.50.47 and MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:33.594665   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHHostname
	I0610 11:37:33.597272   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:33.597637   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:51:fd", ip: ""} in network mk-kubernetes-upgrade-685160: {Iface:virbr1 ExpiryTime:2024-06-10 12:37:23 +0000 UTC Type:0 Mac:52:54:00:9b:51:fd Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-685160 Clientid:01:52:54:00:9b:51:fd}
	I0610 11:37:33.597674   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined IP address 192.168.50.47 and MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:33.597834   52801 provision.go:143] copyHostCerts
	I0610 11:37:33.597897   52801 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem, removing ...
	I0610 11:37:33.597907   52801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 11:37:33.597963   52801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem (1082 bytes)
	I0610 11:37:33.598056   52801 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem, removing ...
	I0610 11:37:33.598066   52801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 11:37:33.598087   52801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem (1123 bytes)
	I0610 11:37:33.598139   52801 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem, removing ...
	I0610 11:37:33.598146   52801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 11:37:33.598163   52801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem (1675 bytes)
	I0610 11:37:33.598238   52801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-685160 san=[127.0.0.1 192.168.50.47 kubernetes-upgrade-685160 localhost minikube]
	I0610 11:37:33.698234   52801 provision.go:177] copyRemoteCerts
	I0610 11:37:33.698292   52801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 11:37:33.698314   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHHostname
	I0610 11:37:33.700883   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:33.701268   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:51:fd", ip: ""} in network mk-kubernetes-upgrade-685160: {Iface:virbr1 ExpiryTime:2024-06-10 12:37:23 +0000 UTC Type:0 Mac:52:54:00:9b:51:fd Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-685160 Clientid:01:52:54:00:9b:51:fd}
	I0610 11:37:33.701300   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined IP address 192.168.50.47 and MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:33.701500   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHPort
	I0610 11:37:33.701719   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHKeyPath
	I0610 11:37:33.701878   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHUsername
	I0610 11:37:33.701993   52801 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/kubernetes-upgrade-685160/id_rsa Username:docker}
	I0610 11:37:33.783283   52801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 11:37:33.809693   52801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0610 11:37:33.833811   52801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 11:37:33.857869   52801 provision.go:87] duration metric: took 267.346745ms to configureAuth
	I0610 11:37:33.857897   52801 buildroot.go:189] setting minikube options for container-runtime
	I0610 11:37:33.858088   52801 config.go:182] Loaded profile config "kubernetes-upgrade-685160": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0610 11:37:33.858192   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHHostname
	I0610 11:37:33.860888   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:33.861254   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:51:fd", ip: ""} in network mk-kubernetes-upgrade-685160: {Iface:virbr1 ExpiryTime:2024-06-10 12:37:23 +0000 UTC Type:0 Mac:52:54:00:9b:51:fd Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-685160 Clientid:01:52:54:00:9b:51:fd}
	I0610 11:37:33.861291   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined IP address 192.168.50.47 and MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:33.861446   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHPort
	I0610 11:37:33.861653   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHKeyPath
	I0610 11:37:33.861839   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHKeyPath
	I0610 11:37:33.862023   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHUsername
	I0610 11:37:33.862185   52801 main.go:141] libmachine: Using SSH client type: native
	I0610 11:37:33.862360   52801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0610 11:37:33.862380   52801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 11:37:34.141573   52801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 11:37:34.141601   52801 main.go:141] libmachine: Checking connection to Docker...
	I0610 11:37:34.141611   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetURL
	I0610 11:37:34.143063   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | Using libvirt version 6000000
	I0610 11:37:34.145569   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:34.145959   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:51:fd", ip: ""} in network mk-kubernetes-upgrade-685160: {Iface:virbr1 ExpiryTime:2024-06-10 12:37:23 +0000 UTC Type:0 Mac:52:54:00:9b:51:fd Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-685160 Clientid:01:52:54:00:9b:51:fd}
	I0610 11:37:34.145988   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined IP address 192.168.50.47 and MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:34.146236   52801 main.go:141] libmachine: Docker is up and running!
	I0610 11:37:34.146252   52801 main.go:141] libmachine: Reticulating splines...
	I0610 11:37:34.146268   52801 client.go:171] duration metric: took 24.559888717s to LocalClient.Create
	I0610 11:37:34.146292   52801 start.go:167] duration metric: took 24.55997381s to libmachine.API.Create "kubernetes-upgrade-685160"
	I0610 11:37:34.146302   52801 start.go:293] postStartSetup for "kubernetes-upgrade-685160" (driver="kvm2")
	I0610 11:37:34.146311   52801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 11:37:34.146332   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .DriverName
	I0610 11:37:34.146588   52801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 11:37:34.146617   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHHostname
	I0610 11:37:34.149083   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:34.149531   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:51:fd", ip: ""} in network mk-kubernetes-upgrade-685160: {Iface:virbr1 ExpiryTime:2024-06-10 12:37:23 +0000 UTC Type:0 Mac:52:54:00:9b:51:fd Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-685160 Clientid:01:52:54:00:9b:51:fd}
	I0610 11:37:34.149572   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined IP address 192.168.50.47 and MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:34.149759   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHPort
	I0610 11:37:34.150005   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHKeyPath
	I0610 11:37:34.150174   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHUsername
	I0610 11:37:34.150337   52801 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/kubernetes-upgrade-685160/id_rsa Username:docker}
	I0610 11:37:34.231280   52801 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 11:37:34.235510   52801 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 11:37:34.235535   52801 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/addons for local assets ...
	I0610 11:37:34.235607   52801 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/files for local assets ...
	I0610 11:37:34.235690   52801 filesync.go:149] local asset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> 107582.pem in /etc/ssl/certs
	I0610 11:37:34.235805   52801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 11:37:34.245692   52801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /etc/ssl/certs/107582.pem (1708 bytes)
	I0610 11:37:34.269845   52801 start.go:296] duration metric: took 123.531777ms for postStartSetup
	I0610 11:37:34.269901   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetConfigRaw
	I0610 11:37:34.270614   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetIP
	I0610 11:37:34.273768   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:34.274159   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:51:fd", ip: ""} in network mk-kubernetes-upgrade-685160: {Iface:virbr1 ExpiryTime:2024-06-10 12:37:23 +0000 UTC Type:0 Mac:52:54:00:9b:51:fd Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-685160 Clientid:01:52:54:00:9b:51:fd}
	I0610 11:37:34.274190   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined IP address 192.168.50.47 and MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:34.274458   52801 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/config.json ...
	I0610 11:37:34.274739   52801 start.go:128] duration metric: took 24.708701318s to createHost
	I0610 11:37:34.274769   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHHostname
	I0610 11:37:34.277645   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:34.278033   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:51:fd", ip: ""} in network mk-kubernetes-upgrade-685160: {Iface:virbr1 ExpiryTime:2024-06-10 12:37:23 +0000 UTC Type:0 Mac:52:54:00:9b:51:fd Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-685160 Clientid:01:52:54:00:9b:51:fd}
	I0610 11:37:34.278060   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined IP address 192.168.50.47 and MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:34.278237   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHPort
	I0610 11:37:34.278449   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHKeyPath
	I0610 11:37:34.278576   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHKeyPath
	I0610 11:37:34.278753   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHUsername
	I0610 11:37:34.278918   52801 main.go:141] libmachine: Using SSH client type: native
	I0610 11:37:34.279088   52801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.47 22 <nil> <nil>}
	I0610 11:37:34.279100   52801 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 11:37:34.385736   52801 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718019454.349875725
	
	I0610 11:37:34.385757   52801 fix.go:216] guest clock: 1718019454.349875725
	I0610 11:37:34.385764   52801 fix.go:229] Guest: 2024-06-10 11:37:34.349875725 +0000 UTC Remote: 2024-06-10 11:37:34.274754561 +0000 UTC m=+48.324175996 (delta=75.121164ms)
	I0610 11:37:34.385784   52801 fix.go:200] guest clock delta is within tolerance: 75.121164ms
	I0610 11:37:34.385789   52801 start.go:83] releasing machines lock for "kubernetes-upgrade-685160", held for 24.819919973s
	I0610 11:37:34.385815   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .DriverName
	I0610 11:37:34.386172   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetIP
	I0610 11:37:34.389223   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:34.389738   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:51:fd", ip: ""} in network mk-kubernetes-upgrade-685160: {Iface:virbr1 ExpiryTime:2024-06-10 12:37:23 +0000 UTC Type:0 Mac:52:54:00:9b:51:fd Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-685160 Clientid:01:52:54:00:9b:51:fd}
	I0610 11:37:34.389770   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined IP address 192.168.50.47 and MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:34.389946   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .DriverName
	I0610 11:37:34.390494   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .DriverName
	I0610 11:37:34.390699   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .DriverName
	I0610 11:37:34.390781   52801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 11:37:34.390819   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHHostname
	I0610 11:37:34.390909   52801 ssh_runner.go:195] Run: cat /version.json
	I0610 11:37:34.390941   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHHostname
	I0610 11:37:34.393862   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:34.394402   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:51:fd", ip: ""} in network mk-kubernetes-upgrade-685160: {Iface:virbr1 ExpiryTime:2024-06-10 12:37:23 +0000 UTC Type:0 Mac:52:54:00:9b:51:fd Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-685160 Clientid:01:52:54:00:9b:51:fd}
	I0610 11:37:34.394427   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined IP address 192.168.50.47 and MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:34.394447   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:34.394481   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHPort
	I0610 11:37:34.394678   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHKeyPath
	I0610 11:37:34.394856   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:51:fd", ip: ""} in network mk-kubernetes-upgrade-685160: {Iface:virbr1 ExpiryTime:2024-06-10 12:37:23 +0000 UTC Type:0 Mac:52:54:00:9b:51:fd Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-685160 Clientid:01:52:54:00:9b:51:fd}
	I0610 11:37:34.394868   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHUsername
	I0610 11:37:34.394882   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined IP address 192.168.50.47 and MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:34.395032   52801 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/kubernetes-upgrade-685160/id_rsa Username:docker}
	I0610 11:37:34.395047   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHPort
	I0610 11:37:34.395264   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHKeyPath
	I0610 11:37:34.395452   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHUsername
	I0610 11:37:34.395651   52801 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/kubernetes-upgrade-685160/id_rsa Username:docker}
	I0610 11:37:34.472785   52801 ssh_runner.go:195] Run: systemctl --version
	I0610 11:37:34.511108   52801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 11:37:34.682402   52801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 11:37:34.688336   52801 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 11:37:34.688435   52801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 11:37:34.704217   52801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 11:37:34.704245   52801 start.go:494] detecting cgroup driver to use...
	I0610 11:37:34.704318   52801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 11:37:34.724923   52801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 11:37:34.740018   52801 docker.go:217] disabling cri-docker service (if available) ...
	I0610 11:37:34.740070   52801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 11:37:34.753840   52801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 11:37:34.767676   52801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 11:37:34.886728   52801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 11:37:35.057281   52801 docker.go:233] disabling docker service ...
	I0610 11:37:35.057374   52801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 11:37:35.071406   52801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 11:37:35.087143   52801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 11:37:35.200476   52801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 11:37:35.318774   52801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 11:37:35.332185   52801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 11:37:35.350463   52801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0610 11:37:35.350519   52801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:37:35.361438   52801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 11:37:35.361505   52801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:37:35.374709   52801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:37:35.388427   52801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
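The three sed edits above set the pause image, switch the cgroup manager, and pin conmon's cgroup in CRI-O's drop-in config. Roughly, the resulting keys in /etc/crio/crio.conf.d/02-crio.conf would look like the sketch below; this is reconstructed from the commands in the log, not a dump of the actual file, and the table headers are the standard CRI-O ones rather than anything shown here:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.2"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"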
	I0610 11:37:35.400839   52801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 11:37:35.411130   52801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 11:37:35.421634   52801 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0610 11:37:35.421695   52801 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0610 11:37:35.434327   52801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
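The earlier sysctl error simply means the br_netfilter module was not loaded yet; after modprobe br_netfilter the bridge-nf-call-iptables key exists, and the echo above turns on IPv4 forwarding for the current boot. As a minimal sketch of how these kernel prerequisites are typically made persistent on a host (not something this log shows minikube doing, and the file names are illustrative):

	# load br_netfilter on every boot
	echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
	# keep bridged traffic visible to iptables and keep forwarding on
	printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' | sudo tee /etc/sysctl.d/99-kubernetes.conf
	sudo sysctl --system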
	I0610 11:37:35.445033   52801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:37:35.574585   52801 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0610 11:37:35.724847   52801 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 11:37:35.724982   52801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 11:37:35.730357   52801 start.go:562] Will wait 60s for crictl version
	I0610 11:37:35.730413   52801 ssh_runner.go:195] Run: which crictl
	I0610 11:37:35.734192   52801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 11:37:35.773956   52801 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
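The version block above is crictl reporting on the CRI-O socket the test just waited for. For reference, the same details can be queried by hand from the guest; the socket path is the one used throughout this log:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info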
	I0610 11:37:35.774052   52801 ssh_runner.go:195] Run: crio --version
	I0610 11:37:35.803086   52801 ssh_runner.go:195] Run: crio --version
	I0610 11:37:35.837434   52801 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0610 11:37:35.839018   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetIP
	I0610 11:37:35.842658   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:35.843071   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:51:fd", ip: ""} in network mk-kubernetes-upgrade-685160: {Iface:virbr1 ExpiryTime:2024-06-10 12:37:23 +0000 UTC Type:0 Mac:52:54:00:9b:51:fd Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-685160 Clientid:01:52:54:00:9b:51:fd}
	I0610 11:37:35.843104   52801 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined IP address 192.168.50.47 and MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:37:35.843323   52801 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0610 11:37:35.847620   52801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 11:37:35.860541   52801 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-685160 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-685160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 11:37:35.860644   52801 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0610 11:37:35.860714   52801 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 11:37:35.898776   52801 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0610 11:37:35.898836   52801 ssh_runner.go:195] Run: which lz4
	I0610 11:37:35.902864   52801 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0610 11:37:35.906968   52801 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 11:37:35.906993   52801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0610 11:37:37.505109   52801 crio.go:462] duration metric: took 1.602281298s to copy over tarball
	I0610 11:37:37.505217   52801 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 11:37:40.485558   52801 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.98030662s)
	I0610 11:37:40.485586   52801 crio.go:469] duration metric: took 2.980442507s to extract the tarball
	I0610 11:37:40.485592   52801 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 11:37:40.532728   52801 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 11:37:40.579667   52801 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0610 11:37:40.579690   52801 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0610 11:37:40.579735   52801 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 11:37:40.579756   52801 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0610 11:37:40.579779   52801 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0610 11:37:40.579825   52801 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0610 11:37:40.579846   52801 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0610 11:37:40.579857   52801 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0610 11:37:40.579888   52801 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0610 11:37:40.579979   52801 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0610 11:37:40.581367   52801 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0610 11:37:40.581440   52801 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 11:37:40.581440   52801 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0610 11:37:40.581475   52801 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0610 11:37:40.581493   52801 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0610 11:37:40.581494   52801 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0610 11:37:40.581442   52801 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0610 11:37:40.581444   52801 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0610 11:37:40.823228   52801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0610 11:37:40.838422   52801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0610 11:37:40.838458   52801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0610 11:37:40.841277   52801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0610 11:37:40.843322   52801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0610 11:37:40.843511   52801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0610 11:37:40.851184   52801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0610 11:37:40.926088   52801 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0610 11:37:40.926126   52801 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0610 11:37:40.926162   52801 ssh_runner.go:195] Run: which crictl
	I0610 11:37:40.981878   52801 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0610 11:37:40.981928   52801 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0610 11:37:40.981934   52801 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0610 11:37:40.981964   52801 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0610 11:37:40.981976   52801 ssh_runner.go:195] Run: which crictl
	I0610 11:37:40.981882   52801 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0610 11:37:40.982004   52801 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0610 11:37:40.982010   52801 ssh_runner.go:195] Run: which crictl
	I0610 11:37:40.982065   52801 ssh_runner.go:195] Run: which crictl
	I0610 11:37:41.020891   52801 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0610 11:37:41.030829   52801 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0610 11:37:41.155651   52801 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0610 11:37:41.155651   52801 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0610 11:37:41.155705   52801 ssh_runner.go:195] Run: which crictl
	I0610 11:37:41.155705   52801 ssh_runner.go:195] Run: which crictl
	I0610 11:37:41.030934   52801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0610 11:37:41.030941   52801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0610 11:37:41.030942   52801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0610 11:37:41.030907   52801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0610 11:37:41.030868   52801 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0610 11:37:41.155981   52801 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0610 11:37:41.156033   52801 ssh_runner.go:195] Run: which crictl
	I0610 11:37:41.184534   52801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0610 11:37:41.184548   52801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0610 11:37:41.313654   52801 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0610 11:37:41.313738   52801 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0610 11:37:41.313748   52801 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0610 11:37:41.313780   52801 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0610 11:37:41.313806   52801 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0610 11:37:41.313844   52801 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0610 11:37:41.313951   52801 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0610 11:37:41.346501   52801 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0610 11:37:41.432188   52801 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 11:37:41.578670   52801 cache_images.go:92] duration metric: took 998.964449ms to LoadCachedImages
	W0610 11:37:41.578744   52801 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0610 11:37:41.578758   52801 kubeadm.go:928] updating node { 192.168.50.47 8443 v1.20.0 crio true true} ...
	I0610 11:37:41.578877   52801 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-685160 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-685160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 11:37:41.578995   52801 ssh_runner.go:195] Run: crio config
	I0610 11:37:41.629948   52801 cni.go:84] Creating CNI manager for ""
	I0610 11:37:41.629976   52801 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:37:41.629987   52801 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 11:37:41.630030   52801 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.47 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-685160 NodeName:kubernetes-upgrade-685160 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0610 11:37:41.630201   52801 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-685160"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.47
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.47"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
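This generated config (InitConfiguration and ClusterConfiguration for kubeadm.k8s.io/v1beta2, plus KubeletConfiguration and KubeProxyConfiguration) is what gets written out as /var/tmp/minikube/kubeadm.yaml later in the log. As a rough sketch of how such a file is consumed on the control-plane node; the exact invocation and extra flags minikube uses are not shown at this point in the log:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml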
	
	I0610 11:37:41.630271   52801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0610 11:37:41.640392   52801 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 11:37:41.640458   52801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 11:37:41.649941   52801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0610 11:37:41.670318   52801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 11:37:41.689706   52801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0610 11:37:41.707775   52801 ssh_runner.go:195] Run: grep 192.168.50.47	control-plane.minikube.internal$ /etc/hosts
	I0610 11:37:41.712143   52801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.47	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 11:37:41.725580   52801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:37:41.847256   52801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 11:37:41.868256   52801 certs.go:68] Setting up /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160 for IP: 192.168.50.47
	I0610 11:37:41.868284   52801 certs.go:194] generating shared ca certs ...
	I0610 11:37:41.868305   52801 certs.go:226] acquiring lock for ca certs: {Name:mke8b68fecbd8b649419d142cdde25446085a9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:37:41.868520   52801 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key
	I0610 11:37:41.868586   52801 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key
	I0610 11:37:41.868601   52801 certs.go:256] generating profile certs ...
	I0610 11:37:41.868672   52801 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/client.key
	I0610 11:37:41.868691   52801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/client.crt with IP's: []
	I0610 11:37:42.113091   52801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/client.crt ...
	I0610 11:37:42.113122   52801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/client.crt: {Name:mk7bedea4d25d05fc144f3ccd3dec3d76e853dd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:37:42.113323   52801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/client.key ...
	I0610 11:37:42.113342   52801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/client.key: {Name:mk7d70100fdbfa63afcff1373a2e92240c08717a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:37:42.113453   52801 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/apiserver.key.eed85a95
	I0610 11:37:42.113488   52801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/apiserver.crt.eed85a95 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.47]
	I0610 11:37:42.581963   52801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/apiserver.crt.eed85a95 ...
	I0610 11:37:42.581997   52801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/apiserver.crt.eed85a95: {Name:mkffe958d378f1fb3bdd98dbf4e3bc5e650f580b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:37:42.582198   52801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/apiserver.key.eed85a95 ...
	I0610 11:37:42.582220   52801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/apiserver.key.eed85a95: {Name:mk4f8b352e7ce13cab9610c7b0d8032960d0d201 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:37:42.582329   52801 certs.go:381] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/apiserver.crt.eed85a95 -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/apiserver.crt
	I0610 11:37:42.582449   52801 certs.go:385] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/apiserver.key.eed85a95 -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/apiserver.key
	I0610 11:37:42.582542   52801 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/proxy-client.key
	I0610 11:37:42.582569   52801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/proxy-client.crt with IP's: []
	I0610 11:37:42.702774   52801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/proxy-client.crt ...
	I0610 11:37:42.702810   52801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/proxy-client.crt: {Name:mk6adfcedca5c6bb09532b6088cf755068271bfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:37:42.706486   52801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/proxy-client.key ...
	I0610 11:37:42.706517   52801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/proxy-client.key: {Name:mk92ebf4a76b17d042d672ddf32c90e0f1b66fe6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:37:42.706780   52801 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem (1338 bytes)
	W0610 11:37:42.706835   52801 certs.go:480] ignoring /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758_empty.pem, impossibly tiny 0 bytes
	I0610 11:37:42.706850   52801 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 11:37:42.706886   52801 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem (1082 bytes)
	I0610 11:37:42.706919   52801 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem (1123 bytes)
	I0610 11:37:42.706951   52801 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem (1675 bytes)
	I0610 11:37:42.707007   52801 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem (1708 bytes)
	I0610 11:37:42.707699   52801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 11:37:42.737680   52801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 11:37:42.764229   52801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 11:37:42.792422   52801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 11:37:42.823776   52801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0610 11:37:42.851428   52801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 11:37:42.876032   52801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 11:37:42.899592   52801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 11:37:42.925539   52801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 11:37:42.955853   52801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem --> /usr/share/ca-certificates/10758.pem (1338 bytes)
	I0610 11:37:42.984838   52801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /usr/share/ca-certificates/107582.pem (1708 bytes)
	I0610 11:37:43.020211   52801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 11:37:43.041118   52801 ssh_runner.go:195] Run: openssl version
	I0610 11:37:43.048291   52801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 11:37:43.061587   52801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:37:43.066624   52801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:37:43.066692   52801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:37:43.074902   52801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 11:37:43.086523   52801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10758.pem && ln -fs /usr/share/ca-certificates/10758.pem /etc/ssl/certs/10758.pem"
	I0610 11:37:43.099197   52801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10758.pem
	I0610 11:37:43.105715   52801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:34 /usr/share/ca-certificates/10758.pem
	I0610 11:37:43.105789   52801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10758.pem
	I0610 11:37:43.114343   52801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10758.pem /etc/ssl/certs/51391683.0"
	I0610 11:37:43.126032   52801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107582.pem && ln -fs /usr/share/ca-certificates/107582.pem /etc/ssl/certs/107582.pem"
	I0610 11:37:43.137074   52801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107582.pem
	I0610 11:37:43.142617   52801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:34 /usr/share/ca-certificates/107582.pem
	I0610 11:37:43.142696   52801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107582.pem
	I0610 11:37:43.149103   52801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107582.pem /etc/ssl/certs/3ec20f2e.0"
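The openssl/ln pairs above install each CA into the guest trust store using OpenSSL's subject-hash naming scheme: a certificate is symlinked as <subject-hash>.0 under /etc/ssl/certs, which is where names like b5213941.0, 51391683.0 and 3ec20f2e.0 come from. A minimal sketch of the same idea for a single certificate:

	# the hash printed here becomes the link name in /etc/ssl/certs
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"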
	I0610 11:37:43.160580   52801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 11:37:43.166583   52801 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 11:37:43.166644   52801 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-685160 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-685160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:37:43.166736   52801 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0610 11:37:43.166793   52801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 11:37:43.214281   52801 cri.go:89] found id: ""
	I0610 11:37:43.214347   52801 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 11:37:43.224799   52801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 11:37:43.234549   52801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:37:43.244613   52801 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:37:43.244637   52801 kubeadm.go:156] found existing configuration files:
	
	I0610 11:37:43.244692   52801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 11:37:43.254766   52801 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:37:43.254892   52801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:37:43.269640   52801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 11:37:43.279279   52801 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:37:43.279369   52801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:37:43.289251   52801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 11:37:43.302183   52801 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:37:43.302262   52801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:37:43.316058   52801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 11:37:43.326109   52801 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:37:43.326182   52801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
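The four grep/rm pairs above are minikube's stale kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, and here every check fails simply because the files do not exist yet on a fresh node. A compact sketch of the same check, assuming the endpoint used in this run:

	ENDPOINT="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # remove any kubeconfig that does not reference the expected endpoint
	  sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done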
	I0610 11:37:43.339654   52801 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 11:37:43.492021   52801 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0610 11:37:43.492110   52801 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 11:37:43.666840   52801 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 11:37:43.667102   52801 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 11:37:43.667266   52801 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 11:37:43.897570   52801 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 11:37:43.899658   52801 out.go:204]   - Generating certificates and keys ...
	I0610 11:37:43.899825   52801 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 11:37:43.899949   52801 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 11:37:44.027203   52801 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 11:37:44.221770   52801 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0610 11:37:44.666571   52801 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0610 11:37:44.729723   52801 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0610 11:37:44.933818   52801 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0610 11:37:44.934017   52801 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-685160 localhost] and IPs [192.168.50.47 127.0.0.1 ::1]
	I0610 11:37:45.042608   52801 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0610 11:37:45.042926   52801 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-685160 localhost] and IPs [192.168.50.47 127.0.0.1 ::1]
	I0610 11:37:45.376559   52801 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 11:37:45.583435   52801 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 11:37:45.726816   52801 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0610 11:37:45.727113   52801 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 11:37:45.811734   52801 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 11:37:46.023769   52801 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 11:37:46.144280   52801 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 11:37:46.756574   52801 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 11:37:46.778697   52801 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 11:37:46.780016   52801 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 11:37:46.780081   52801 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 11:37:46.915904   52801 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 11:37:46.917759   52801 out.go:204]   - Booting up control plane ...
	I0610 11:37:46.917893   52801 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 11:37:46.927637   52801 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 11:37:46.931423   52801 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 11:37:46.931565   52801 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 11:37:46.940496   52801 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 11:38:26.926757   52801 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0610 11:38:26.927556   52801 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:38:26.927806   52801 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:38:31.927444   52801 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:38:31.927681   52801 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:38:41.926855   52801 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:38:41.927120   52801 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:39:01.926708   52801 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:39:01.926894   52801 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:39:41.927413   52801 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:39:41.928070   52801 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:39:41.928119   52801 kubeadm.go:309] 
	I0610 11:39:41.928244   52801 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0610 11:39:41.928337   52801 kubeadm.go:309] 		timed out waiting for the condition
	I0610 11:39:41.928349   52801 kubeadm.go:309] 
	I0610 11:39:41.928437   52801 kubeadm.go:309] 	This error is likely caused by:
	I0610 11:39:41.928517   52801 kubeadm.go:309] 		- The kubelet is not running
	I0610 11:39:41.928806   52801 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0610 11:39:41.928824   52801 kubeadm.go:309] 
	I0610 11:39:41.929117   52801 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0610 11:39:41.929201   52801 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0610 11:39:41.929279   52801 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0610 11:39:41.929290   52801 kubeadm.go:309] 
	I0610 11:39:41.929522   52801 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0610 11:39:41.929694   52801 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0610 11:39:41.929707   52801 kubeadm.go:309] 
	I0610 11:39:41.929947   52801 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0610 11:39:41.930146   52801 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0610 11:39:41.930367   52801 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0610 11:39:41.930475   52801 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0610 11:39:41.930487   52801 kubeadm.go:309] 
	I0610 11:39:41.930624   52801 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 11:39:41.930754   52801 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0610 11:39:41.930987   52801 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
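The probe kubeadm keeps retrying above is the kubelet's local healthz endpoint on port 10248, so the refused connections mean the kubelet itself never came up. A short sketch of the checks the output recommends, run on the node with the CRI-O socket path reported by kubeadm:

	# the health probe kubeadm was polling
	curl -sSL http://localhost:10248/healthz
	# kubelet service state and recent journal entries
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 100
	# any control-plane containers CRI-O managed to start
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause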
	W0610 11:39:41.931103   52801 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-685160 localhost] and IPs [192.168.50.47 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-685160 localhost] and IPs [192.168.50.47 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0610 11:39:41.931163   52801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0610 11:39:42.412197   52801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:39:42.426898   52801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:39:42.439412   52801 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:39:42.439435   52801 kubeadm.go:156] found existing configuration files:
	
	I0610 11:39:42.439486   52801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 11:39:42.452220   52801 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:39:42.452293   52801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:39:42.462194   52801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 11:39:42.471011   52801 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:39:42.471076   52801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:39:42.480412   52801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 11:39:42.488933   52801 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:39:42.489023   52801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:39:42.498000   52801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 11:39:42.506690   52801 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:39:42.506761   52801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 11:39:42.516180   52801 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 11:39:42.763373   52801 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 11:41:38.978226   52801 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0610 11:41:38.978345   52801 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0610 11:41:38.979840   52801 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0610 11:41:38.979900   52801 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 11:41:38.980007   52801 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 11:41:38.980104   52801 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 11:41:38.980227   52801 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 11:41:38.980306   52801 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 11:41:38.981987   52801 out.go:204]   - Generating certificates and keys ...
	I0610 11:41:38.982076   52801 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 11:41:38.982157   52801 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 11:41:38.982246   52801 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 11:41:38.982317   52801 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 11:41:38.982377   52801 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 11:41:38.982451   52801 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 11:41:38.982546   52801 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 11:41:38.982633   52801 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 11:41:38.982734   52801 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 11:41:38.982846   52801 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 11:41:38.982900   52801 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 11:41:38.982975   52801 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 11:41:38.983056   52801 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 11:41:38.983131   52801 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 11:41:38.983215   52801 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 11:41:38.983292   52801 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 11:41:38.983422   52801 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 11:41:38.983510   52801 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 11:41:38.983557   52801 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 11:41:38.983634   52801 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 11:41:38.986001   52801 out.go:204]   - Booting up control plane ...
	I0610 11:41:38.986084   52801 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 11:41:38.986149   52801 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 11:41:38.986223   52801 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 11:41:38.986293   52801 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 11:41:38.986443   52801 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 11:41:38.986498   52801 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0610 11:41:38.986568   52801 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:41:38.986742   52801 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:41:38.986802   52801 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:41:38.986994   52801 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:41:38.987077   52801 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:41:38.987256   52801 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:41:38.987319   52801 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:41:38.987539   52801 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:41:38.987601   52801 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:41:38.987804   52801 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:41:38.987813   52801 kubeadm.go:309] 
	I0610 11:41:38.987846   52801 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0610 11:41:38.987878   52801 kubeadm.go:309] 		timed out waiting for the condition
	I0610 11:41:38.987885   52801 kubeadm.go:309] 
	I0610 11:41:38.987913   52801 kubeadm.go:309] 	This error is likely caused by:
	I0610 11:41:38.987943   52801 kubeadm.go:309] 		- The kubelet is not running
	I0610 11:41:38.988048   52801 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0610 11:41:38.988058   52801 kubeadm.go:309] 
	I0610 11:41:38.988158   52801 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0610 11:41:38.988196   52801 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0610 11:41:38.988227   52801 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0610 11:41:38.988237   52801 kubeadm.go:309] 
	I0610 11:41:38.988345   52801 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0610 11:41:38.988443   52801 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0610 11:41:38.988451   52801 kubeadm.go:309] 
	I0610 11:41:38.988536   52801 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0610 11:41:38.988610   52801 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0610 11:41:38.988669   52801 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0610 11:41:38.988727   52801 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0610 11:41:38.988778   52801 kubeadm.go:309] 
	I0610 11:41:38.988794   52801 kubeadm.go:393] duration metric: took 3m55.822153997s to StartCluster
	I0610 11:41:38.988831   52801 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:41:38.988884   52801 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:41:39.032844   52801 cri.go:89] found id: ""
	I0610 11:41:39.032879   52801 logs.go:276] 0 containers: []
	W0610 11:41:39.032892   52801 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:41:39.032900   52801 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:41:39.032980   52801 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:41:39.066833   52801 cri.go:89] found id: ""
	I0610 11:41:39.066863   52801 logs.go:276] 0 containers: []
	W0610 11:41:39.066873   52801 logs.go:278] No container was found matching "etcd"
	I0610 11:41:39.066879   52801 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:41:39.066930   52801 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:41:39.101768   52801 cri.go:89] found id: ""
	I0610 11:41:39.101800   52801 logs.go:276] 0 containers: []
	W0610 11:41:39.101811   52801 logs.go:278] No container was found matching "coredns"
	I0610 11:41:39.101819   52801 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:41:39.101881   52801 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:41:39.134799   52801 cri.go:89] found id: ""
	I0610 11:41:39.134821   52801 logs.go:276] 0 containers: []
	W0610 11:41:39.134828   52801 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:41:39.134834   52801 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:41:39.134884   52801 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:41:39.168439   52801 cri.go:89] found id: ""
	I0610 11:41:39.168474   52801 logs.go:276] 0 containers: []
	W0610 11:41:39.168484   52801 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:41:39.168491   52801 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:41:39.168548   52801 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:41:39.207800   52801 cri.go:89] found id: ""
	I0610 11:41:39.207835   52801 logs.go:276] 0 containers: []
	W0610 11:41:39.207845   52801 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:41:39.207852   52801 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:41:39.207917   52801 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:41:39.240381   52801 cri.go:89] found id: ""
	I0610 11:41:39.240414   52801 logs.go:276] 0 containers: []
	W0610 11:41:39.240425   52801 logs.go:278] No container was found matching "kindnet"
	I0610 11:41:39.240436   52801 logs.go:123] Gathering logs for kubelet ...
	I0610 11:41:39.240451   52801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:41:39.291928   52801 logs.go:123] Gathering logs for dmesg ...
	I0610 11:41:39.291961   52801 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:41:39.304865   52801 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:41:39.304890   52801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:41:39.420979   52801 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:41:39.421005   52801 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:41:39.421020   52801 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:41:39.522886   52801 logs.go:123] Gathering logs for container status ...
	I0610 11:41:39.522930   52801 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
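With no control-plane containers found, minikube falls back to gathering node-level evidence: kubelet and CRI-O journals, dmesg, an attempted kubectl describe nodes, and a raw container listing. The same bundle can be collected manually; a sketch using the commands shown in the log (the kubectl path is the binary minikube installed for this v1.20.0 run):

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo crictl ps -a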
	W0610 11:41:39.559286   52801 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0610 11:41:39.559338   52801 out.go:239] * 
	W0610 11:41:39.559402   52801 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0610 11:41:39.559435   52801 out.go:239] * 
	W0610 11:41:39.560383   52801 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 11:41:39.564008   52801 out.go:177] 
	W0610 11:41:39.565210   52801 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0610 11:41:39.565274   52801 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0610 11:41:39.565303   52801 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0610 11:41:39.566907   52801 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-685160 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
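The v1.20.0 start above exits with K8S_KUBELET_NOT_RUNNING, and minikube's own suggestion in the output (see the suggestion and related issue #4172 above) is to inspect the kubelet journal and retry with the kubelet cgroup driver pinned to systemd. A minimal triage sketch along those lines, reusing the profile name from this run; the assumption that a crio/kubelet cgroup-driver mismatch is the cause is not confirmed by these logs:

    # Commands suggested in the error output above, run inside the minikube VM
    minikube ssh -p kubernetes-upgrade-685160 'sudo systemctl status kubelet --no-pager'
    minikube ssh -p kubernetes-upgrade-685160 'sudo journalctl -xeu kubelet | tail -n 100'
    # Check which cgroup manager CRI-O is configured with (config location may vary)
    minikube ssh -p kubernetes-upgrade-685160 'sudo grep -r cgroup_manager /etc/crio/'
    # Retry the v1.20.0 start with the kubelet cgroup driver set to systemd, as the log suggests
    minikube start -p kubernetes-upgrade-685160 --memory=2200 --kubernetes-version=v1.20.0 \
      --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd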
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-685160
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-685160: (1.372326273s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-685160 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-685160 status --format={{.Host}}: exit status 7 (70.736396ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-685160 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0610 11:41:57.913979   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-685160 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.997012378s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-685160 version --output=json
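The version check at version_upgrade_test.go:248 confirms the control plane is now serving v1.30.1. Outside the Go harness, roughly the same check can be done with kubectl alone; jq here is only for readability and is an assumption, not something the test uses:

    # Confirm the upgraded server version for this context
    kubectl --context kubernetes-upgrade-685160 version --output=json | jq -r '.serverVersion.gitVersion'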
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-685160 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-685160 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (78.034929ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-685160] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19046
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-685160
	    minikube start -p kubernetes-upgrade-685160 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6851602 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.1, by running:
	    
	    minikube start -p kubernetes-upgrade-685160 --kubernetes-version=v1.30.1
	    

                                                
                                                
** /stderr **
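Exit status 106 here is the expected outcome: minikube refuses to downgrade the existing v1.30.1 cluster to v1.20.0 in place, which is what the test asserts next. A sketch of the same check from a shell, using the flags from the command above; treating 106 as the downgrade-refused code mirrors what this run observed rather than a documented stable contract:

    # Expect the in-place downgrade to be refused (observed exit code 106 in this run)
    minikube start -p kubernetes-upgrade-685160 --memory=2200 --kubernetes-version=v1.20.0 \
      --driver=kvm2 --container-runtime=crio
    rc=$?
    [ "$rc" -eq 106 ] && echo "downgrade correctly refused (K8S_DOWNGRADE_UNSUPPORTED)" || echo "unexpected exit code: $rc"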
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-685160 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-685160 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m42.792163118s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-06-10 11:44:04.994243566 +0000 UTC m=+4998.080275069
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-685160 -n kubernetes-upgrade-685160
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-685160 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-685160 logs -n 25: (1.031797676s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p stopped-upgrade-161665                              | stopped-upgrade-161665       | jenkins | v1.33.1 | 10 Jun 24 11:38 UTC | 10 Jun 24 11:39 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| pause   | -p pause-761253                                        | pause-761253                 | jenkins | v1.33.1 | 10 Jun 24 11:38 UTC | 10 Jun 24 11:38 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| unpause | -p pause-761253                                        | pause-761253                 | jenkins | v1.33.1 | 10 Jun 24 11:38 UTC | 10 Jun 24 11:38 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| pause   | -p pause-761253                                        | pause-761253                 | jenkins | v1.33.1 | 10 Jun 24 11:38 UTC | 10 Jun 24 11:38 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| delete  | -p pause-761253                                        | pause-761253                 | jenkins | v1.33.1 | 10 Jun 24 11:38 UTC | 10 Jun 24 11:38 UTC |
	|         | --alsologtostderr -v=5                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-324836                              | cert-expiration-324836       | jenkins | v1.33.1 | 10 Jun 24 11:38 UTC | 10 Jun 24 11:39 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p pause-761253                                        | pause-761253                 | jenkins | v1.33.1 | 10 Jun 24 11:38 UTC | 10 Jun 24 11:38 UTC |
	| start   | -p old-k8s-version-166693                              | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-161665                              | stopped-upgrade-161665       | jenkins | v1.33.1 | 10 Jun 24 11:39 UTC | 10 Jun 24 11:39 UTC |
	| start   | -p embed-certs-832735                                  | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:39 UTC | 10 Jun 24 11:40 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-324836                              | cert-expiration-324836       | jenkins | v1.33.1 | 10 Jun 24 11:39 UTC | 10 Jun 24 11:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-036579 | jenkins | v1.33.1 | 10 Jun 24 11:39 UTC | 10 Jun 24 11:39 UTC |
	|         | disable-driver-mounts-036579                           |                              |         |         |                     |                     |
	| start   | -p no-preload-298179                                   | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:39 UTC | 10 Jun 24 11:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-832735            | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:40 UTC | 10 Jun 24 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-832735                                  | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC | 10 Jun 24 11:41 UTC |
	| addons  | enable metrics-server -p no-preload-298179             | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC | 10 Jun 24 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-298179                                   | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC | 10 Jun 24 11:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC | 10 Jun 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-832735                 | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-832735                                  | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-166693        | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-298179                  | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 11:42:59
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 11:42:59.914817   56769 out.go:291] Setting OutFile to fd 1 ...
	I0610 11:42:59.915044   56769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:42:59.915053   56769 out.go:304] Setting ErrFile to fd 2...
	I0610 11:42:59.915057   56769 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:42:59.915233   56769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 11:42:59.915731   56769 out.go:298] Setting JSON to false
	I0610 11:42:59.916628   56769 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5121,"bootTime":1718014659,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 11:42:59.916683   56769 start.go:139] virtualization: kvm guest
	I0610 11:42:59.919276   56769 out.go:177] * [embed-certs-832735] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 11:42:59.920823   56769 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 11:42:59.920872   56769 notify.go:220] Checking for updates...
	I0610 11:42:59.922405   56769 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 11:42:59.924016   56769 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 11:42:59.925522   56769 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 11:42:59.926949   56769 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 11:42:59.928459   56769 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 11:42:59.930280   56769 config.go:182] Loaded profile config "embed-certs-832735": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:42:59.930889   56769 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:42:59.930969   56769 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:42:59.945590   56769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39355
	I0610 11:42:59.946063   56769 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:42:59.946653   56769 main.go:141] libmachine: Using API Version  1
	I0610 11:42:59.946689   56769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:42:59.947113   56769 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:42:59.947326   56769 main.go:141] libmachine: (embed-certs-832735) Calling .DriverName
	I0610 11:42:59.947673   56769 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 11:42:59.948126   56769 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:42:59.948180   56769 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:42:59.963647   56769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33343
	I0610 11:42:59.964064   56769 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:42:59.964490   56769 main.go:141] libmachine: Using API Version  1
	I0610 11:42:59.964517   56769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:42:59.964830   56769 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:42:59.965096   56769 main.go:141] libmachine: (embed-certs-832735) Calling .DriverName
	I0610 11:43:00.000013   56769 out.go:177] * Using the kvm2 driver based on existing profile
	I0610 11:43:00.001543   56769 start.go:297] selected driver: kvm2
	I0610 11:43:00.001569   56769 start.go:901] validating driver "kvm2" against &{Name:embed-certs-832735 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:embed-certs-832735 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:43:00.001664   56769 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 11:43:00.002371   56769 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 11:43:00.002445   56769 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19046-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0610 11:43:00.017957   56769 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0610 11:43:00.018417   56769 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 11:43:00.018483   56769 cni.go:84] Creating CNI manager for ""
	I0610 11:43:00.018501   56769 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:43:00.018561   56769 start.go:340] cluster config:
	{Name:embed-certs-832735 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-832735 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.19 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:43:00.018677   56769 iso.go:125] acquiring lock: {Name:mk85871dbcb370cef71a4aa2ff7ef43503908c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 11:43:00.020796   56769 out.go:177] * Starting "embed-certs-832735" primary control-plane node in "embed-certs-832735" cluster
	I0610 11:43:00.022057   56769 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 11:43:00.022093   56769 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0610 11:43:00.022103   56769 cache.go:56] Caching tarball of preloaded images
	I0610 11:43:00.022179   56769 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 11:43:00.022189   56769 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0610 11:43:00.022274   56769 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/embed-certs-832735/config.json ...
	I0610 11:43:00.022453   56769 start.go:360] acquireMachinesLock for embed-certs-832735: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 11:43:00.022494   56769 start.go:364] duration metric: took 24.013µs to acquireMachinesLock for "embed-certs-832735"
	I0610 11:43:00.022507   56769 start.go:96] Skipping create...Using existing machine configuration
	I0610 11:43:00.022514   56769 fix.go:54] fixHost starting: 
	I0610 11:43:00.022768   56769 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:43:00.022801   56769 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:43:00.037545   56769 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43699
	I0610 11:43:00.038051   56769 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:43:00.038526   56769 main.go:141] libmachine: Using API Version  1
	I0610 11:43:00.038551   56769 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:43:00.038826   56769 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:43:00.039025   56769 main.go:141] libmachine: (embed-certs-832735) Calling .DriverName
	I0610 11:43:00.039164   56769 main.go:141] libmachine: (embed-certs-832735) Calling .GetState
	I0610 11:43:00.040822   56769 fix.go:112] recreateIfNeeded on embed-certs-832735: state=Running err=<nil>
	W0610 11:43:00.040843   56769 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 11:43:00.043671   56769 out.go:177] * Updating the running kvm2 "embed-certs-832735" VM ...
	I0610 11:43:00.046483   56769 machine.go:94] provisionDockerMachine start ...
	I0610 11:43:00.046534   56769 main.go:141] libmachine: (embed-certs-832735) Calling .DriverName
	I0610 11:43:00.046826   56769 main.go:141] libmachine: (embed-certs-832735) Calling .GetSSHHostname
	I0610 11:43:00.049375   56769 main.go:141] libmachine: (embed-certs-832735) DBG | domain embed-certs-832735 has defined MAC address 52:54:00:db:f7:d7 in network mk-embed-certs-832735
	I0610 11:43:00.049798   56769 main.go:141] libmachine: (embed-certs-832735) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:f7:d7", ip: ""} in network mk-embed-certs-832735: {Iface:virbr4 ExpiryTime:2024-06-10 12:39:33 +0000 UTC Type:0 Mac:52:54:00:db:f7:d7 Iaid: IPaddr:192.168.61.19 Prefix:24 Hostname:embed-certs-832735 Clientid:01:52:54:00:db:f7:d7}
	I0610 11:43:00.049823   56769 main.go:141] libmachine: (embed-certs-832735) DBG | domain embed-certs-832735 has defined IP address 192.168.61.19 and MAC address 52:54:00:db:f7:d7 in network mk-embed-certs-832735
	I0610 11:43:00.050000   56769 main.go:141] libmachine: (embed-certs-832735) Calling .GetSSHPort
	I0610 11:43:00.050195   56769 main.go:141] libmachine: (embed-certs-832735) Calling .GetSSHKeyPath
	I0610 11:43:00.050378   56769 main.go:141] libmachine: (embed-certs-832735) Calling .GetSSHKeyPath
	I0610 11:43:00.050523   56769 main.go:141] libmachine: (embed-certs-832735) Calling .GetSSHUsername
	I0610 11:43:00.050664   56769 main.go:141] libmachine: Using SSH client type: native
	I0610 11:43:00.050907   56769 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.61.19 22 <nil> <nil>}
	I0610 11:43:00.050924   56769 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 11:43:02.945407   56769 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.19:22: connect: no route to host
	I0610 11:43:06.017211   56769 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.19:22: connect: no route to host
	I0610 11:43:12.097337   56769 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.19:22: connect: no route to host
	I0610 11:43:15.169301   56769 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.19:22: connect: no route to host
	I0610 11:43:21.578083   54458 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0610 11:43:21.578196   54458 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0610 11:43:21.579875   54458 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0610 11:43:21.579936   54458 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 11:43:21.580027   54458 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 11:43:21.580111   54458 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 11:43:21.580225   54458 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 11:43:21.580305   54458 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 11:43:21.582111   54458 out.go:204]   - Generating certificates and keys ...
	I0610 11:43:21.582186   54458 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 11:43:21.582243   54458 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 11:43:21.582349   54458 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 11:43:21.582436   54458 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 11:43:21.582530   54458 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 11:43:21.582616   54458 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 11:43:21.582704   54458 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 11:43:21.582789   54458 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 11:43:21.582892   54458 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 11:43:21.582993   54458 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 11:43:21.583048   54458 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 11:43:21.583124   54458 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 11:43:21.583221   54458 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 11:43:21.583286   54458 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 11:43:21.583352   54458 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 11:43:21.583400   54458 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 11:43:21.583526   54458 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 11:43:21.583624   54458 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 11:43:21.583681   54458 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 11:43:21.583775   54458 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 11:43:21.585365   54458 out.go:204]   - Booting up control plane ...
	I0610 11:43:21.585466   54458 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 11:43:21.585553   54458 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 11:43:21.585629   54458 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 11:43:21.585725   54458 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 11:43:21.585918   54458 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 11:43:21.585967   54458 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0610 11:43:21.586043   54458 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:43:21.586248   54458 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:43:21.586346   54458 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:43:21.586561   54458 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:43:21.586621   54458 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:43:21.586809   54458 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:43:21.586903   54458 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:43:21.587107   54458 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:43:21.587169   54458 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:43:21.587329   54458 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:43:21.587336   54458 kubeadm.go:309] 
	I0610 11:43:21.587369   54458 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0610 11:43:21.587409   54458 kubeadm.go:309] 		timed out waiting for the condition
	I0610 11:43:21.587423   54458 kubeadm.go:309] 
	I0610 11:43:21.587474   54458 kubeadm.go:309] 	This error is likely caused by:
	I0610 11:43:21.587504   54458 kubeadm.go:309] 		- The kubelet is not running
	I0610 11:43:21.587594   54458 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0610 11:43:21.587602   54458 kubeadm.go:309] 
	I0610 11:43:21.587707   54458 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0610 11:43:21.587763   54458 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0610 11:43:21.587810   54458 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0610 11:43:21.587821   54458 kubeadm.go:309] 
	I0610 11:43:21.587904   54458 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0610 11:43:21.587972   54458 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0610 11:43:21.587981   54458 kubeadm.go:309] 
	I0610 11:43:21.588126   54458 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0610 11:43:21.588238   54458 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0610 11:43:21.588341   54458 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0610 11:43:21.588419   54458 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0610 11:43:21.588442   54458 kubeadm.go:309] 
	I0610 11:43:21.588497   54458 kubeadm.go:393] duration metric: took 3m55.072180081s to StartCluster
	I0610 11:43:21.588544   54458 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:43:21.588598   54458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:43:21.630767   54458 cri.go:89] found id: ""
	I0610 11:43:21.630799   54458 logs.go:276] 0 containers: []
	W0610 11:43:21.630810   54458 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:43:21.630817   54458 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:43:21.630889   54458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:43:21.669373   54458 cri.go:89] found id: ""
	I0610 11:43:21.669402   54458 logs.go:276] 0 containers: []
	W0610 11:43:21.669410   54458 logs.go:278] No container was found matching "etcd"
	I0610 11:43:21.669423   54458 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:43:21.669472   54458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:43:21.701513   54458 cri.go:89] found id: ""
	I0610 11:43:21.701545   54458 logs.go:276] 0 containers: []
	W0610 11:43:21.701556   54458 logs.go:278] No container was found matching "coredns"
	I0610 11:43:21.701562   54458 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:43:21.701631   54458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:43:21.734867   54458 cri.go:89] found id: ""
	I0610 11:43:21.734902   54458 logs.go:276] 0 containers: []
	W0610 11:43:21.734910   54458 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:43:21.734916   54458 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:43:21.734972   54458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:43:21.770768   54458 cri.go:89] found id: ""
	I0610 11:43:21.770798   54458 logs.go:276] 0 containers: []
	W0610 11:43:21.770806   54458 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:43:21.770812   54458 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:43:21.770861   54458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:43:21.803556   54458 cri.go:89] found id: ""
	I0610 11:43:21.803578   54458 logs.go:276] 0 containers: []
	W0610 11:43:21.803586   54458 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:43:21.803594   54458 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:43:21.803658   54458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:43:21.836121   54458 cri.go:89] found id: ""
	I0610 11:43:21.836162   54458 logs.go:276] 0 containers: []
	W0610 11:43:21.836171   54458 logs.go:278] No container was found matching "kindnet"
	I0610 11:43:21.836192   54458 logs.go:123] Gathering logs for kubelet ...
	I0610 11:43:21.836206   54458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:43:21.885236   54458 logs.go:123] Gathering logs for dmesg ...
	I0610 11:43:21.885272   54458 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:43:21.898167   54458 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:43:21.898197   54458 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:43:22.006483   54458 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:43:22.006510   54458 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:43:22.006527   54458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:43:22.095983   54458 logs.go:123] Gathering logs for container status ...
	I0610 11:43:22.096022   54458 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0610 11:43:22.142642   54458 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0610 11:43:22.142699   54458 out.go:239] * 
	W0610 11:43:22.142772   54458 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0610 11:43:22.142804   54458 out.go:239] * 
	W0610 11:43:22.144023   54458 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 11:43:22.147617   54458 out.go:177] 
	W0610 11:43:22.149005   54458 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0610 11:43:22.149072   54458 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0610 11:43:22.149099   54458 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0610 11:43:22.150533   54458 out.go:177] 
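
The exit above ends with minikube's own hints: check 'journalctl -xeu kubelet' and retry with --extra-config=kubelet.cgroup-driver=systemd. A minimal troubleshooting sketch along those lines, assuming shell access to the failing node; the crio cgroup_manager key name and the profile placeholder are assumptions, everything else reuses commands already shown in this log:

	# Kubelet status and recent log, as suggested in the error text above.
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100

	# Which cgroup manager CRI-O is configured with (key name assumed from crio.conf).
	sudo crio config | grep -i cgroup_manager

	# Any control-plane containers the runtime did manage to start.
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# If the drivers disagree, retry with the flag the log suggests (profile name is a placeholder).
	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
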
	I0610 11:43:21.249223   56769 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.19:22: connect: no route to host
	I0610 11:43:24.321287   56769 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.19:22: connect: no route to host
	I0610 11:43:33.441277   56769 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.19:22: connect: no route to host
	I0610 11:43:36.513278   56769 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.19:22: connect: no route to host
	I0610 11:43:42.593281   56769 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.19:22: connect: no route to host
	I0610 11:43:45.665258   56769 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.19:22: connect: no route to host
	I0610 11:43:51.745197   56769 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.19:22: connect: no route to host
	I0610 11:43:54.817269   56769 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.19:22: connect: no route to host
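
The repeated "no route to host" dials above are libmachine polling the guest's SSH port while the VM is still coming up or is unreachable. A quick manual check under the same assumptions (kvm2 driver, the IP printed in the log, virsh and nc available on the host):

	# Is the libvirt domain actually running?
	sudo virsh list --all

	# Is TCP 22 on the guest reachable from the host at all?
	nc -z -w 3 192.168.61.19 22 && echo "ssh port reachable" || echo "unreachable or filtered"
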
	I0610 11:43:55.923663   56499 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.230677197s)
	I0610 11:43:55.923699   56499 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 11:43:55.923764   56499 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 11:43:55.929261   56499 start.go:562] Will wait 60s for crictl version
	I0610 11:43:55.929313   56499 ssh_runner.go:195] Run: which crictl
	I0610 11:43:55.932823   56499 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 11:43:55.978733   56499 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0610 11:43:55.978834   56499 ssh_runner.go:195] Run: crio --version
	I0610 11:43:56.006371   56499 ssh_runner.go:195] Run: crio --version
	I0610 11:43:56.035531   56499 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0610 11:43:56.036981   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetIP
	I0610 11:43:56.039771   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:43:56.040209   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:51:fd", ip: ""} in network mk-kubernetes-upgrade-685160: {Iface:virbr1 ExpiryTime:2024-06-10 12:41:51 +0000 UTC Type:0 Mac:52:54:00:9b:51:fd Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-685160 Clientid:01:52:54:00:9b:51:fd}
	I0610 11:43:56.040243   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined IP address 192.168.50.47 and MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:43:56.040455   56499 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0610 11:43:56.044486   56499 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-685160 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.1 ClusterName:kubernetes-upgrade-685160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 11:43:56.044613   56499 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 11:43:56.044674   56499 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 11:43:56.084261   56499 crio.go:514] all images are preloaded for cri-o runtime.
	I0610 11:43:56.084285   56499 crio.go:433] Images already preloaded, skipping extraction
	I0610 11:43:56.084329   56499 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 11:43:56.118213   56499 crio.go:514] all images are preloaded for cri-o runtime.
	I0610 11:43:56.118235   56499 cache_images.go:84] Images are preloaded, skipping loading
	I0610 11:43:56.118243   56499 kubeadm.go:928] updating node { 192.168.50.47 8443 v1.30.1 crio true true} ...
	I0610 11:43:56.118342   56499 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-685160 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.47
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-685160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 11:43:56.118402   56499 ssh_runner.go:195] Run: crio config
	I0610 11:43:56.166832   56499 cni.go:84] Creating CNI manager for ""
	I0610 11:43:56.166852   56499 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:43:56.166860   56499 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 11:43:56.166879   56499 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.47 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-685160 NodeName:kubernetes-upgrade-685160 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.47"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.47 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 11:43:56.167016   56499 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.47
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-685160"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.47
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.47"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
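The YAML above is the kubeadm configuration minikube renders before calling kubeadm init; a few lines below it is staged on the node as /var/tmp/minikube/kubeadm.yaml.new and then passed via --config. A hedged way to sanity-check such a file without touching /etc/kubernetes, reusing the binary path from this log, is a dry run:

	# Validate and print what kubeadm would do; manifests go to a temporary directory.
	sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
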
	I0610 11:43:56.167083   56499 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 11:43:56.176389   56499 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 11:43:56.176449   56499 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 11:43:56.185064   56499 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0610 11:43:56.200932   56499 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 11:43:56.220906   56499 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0610 11:43:56.240566   56499 ssh_runner.go:195] Run: grep 192.168.50.47	control-plane.minikube.internal$ /etc/hosts
	I0610 11:43:56.245153   56499 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:43:56.367286   56499 ssh_runner.go:195] Run: sudo systemctl start kubelet
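
After the daemon-reload and kubelet start above, the drop-in written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is what supplies the ExecStart flags shown earlier. These are standard systemd/procps commands, not part of the log, for confirming what the unit actually picked up:

	# Unit file plus all drop-ins, including 10-kubeadm.conf.
	systemctl cat kubelet
	# Did the service come up after the restart?
	sudo systemctl is-active kubelet
	# Full command line of the running kubelet, flags included.
	ps -o args= -C kubelet
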
	I0610 11:43:56.386952   56499 certs.go:68] Setting up /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160 for IP: 192.168.50.47
	I0610 11:43:56.386981   56499 certs.go:194] generating shared ca certs ...
	I0610 11:43:56.387007   56499 certs.go:226] acquiring lock for ca certs: {Name:mke8b68fecbd8b649419d142cdde25446085a9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:43:56.387185   56499 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key
	I0610 11:43:56.387227   56499 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key
	I0610 11:43:56.387239   56499 certs.go:256] generating profile certs ...
	I0610 11:43:56.387327   56499 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/client.key
	I0610 11:43:56.387369   56499 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/apiserver.key.eed85a95
	I0610 11:43:56.387400   56499 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/proxy-client.key
	I0610 11:43:56.387511   56499 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem (1338 bytes)
	W0610 11:43:56.387539   56499 certs.go:480] ignoring /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758_empty.pem, impossibly tiny 0 bytes
	I0610 11:43:56.387548   56499 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 11:43:56.387566   56499 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem (1082 bytes)
	I0610 11:43:56.387587   56499 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem (1123 bytes)
	I0610 11:43:56.387607   56499 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem (1675 bytes)
	I0610 11:43:56.387641   56499 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem (1708 bytes)
	I0610 11:43:56.388337   56499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 11:43:56.411460   56499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 11:43:56.435089   56499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 11:43:56.458399   56499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 11:43:56.480670   56499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0610 11:43:56.504887   56499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 11:43:56.529530   56499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 11:43:56.553296   56499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 11:43:56.576409   56499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /usr/share/ca-certificates/107582.pem (1708 bytes)
	I0610 11:43:56.601005   56499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 11:43:56.622985   56499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem --> /usr/share/ca-certificates/10758.pem (1338 bytes)
	I0610 11:43:56.646250   56499 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 11:43:56.662543   56499 ssh_runner.go:195] Run: openssl version
	I0610 11:43:56.668411   56499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10758.pem && ln -fs /usr/share/ca-certificates/10758.pem /etc/ssl/certs/10758.pem"
	I0610 11:43:56.680136   56499 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10758.pem
	I0610 11:43:56.684596   56499 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:34 /usr/share/ca-certificates/10758.pem
	I0610 11:43:56.684657   56499 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10758.pem
	I0610 11:43:56.690202   56499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10758.pem /etc/ssl/certs/51391683.0"
	I0610 11:43:56.699444   56499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107582.pem && ln -fs /usr/share/ca-certificates/107582.pem /etc/ssl/certs/107582.pem"
	I0610 11:43:56.709894   56499 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107582.pem
	I0610 11:43:56.714435   56499 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:34 /usr/share/ca-certificates/107582.pem
	I0610 11:43:56.714492   56499 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107582.pem
	I0610 11:43:56.720029   56499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107582.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 11:43:56.729474   56499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 11:43:56.739616   56499 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:43:56.743655   56499 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:43:56.743701   56499 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:43:56.749545   56499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 11:43:56.759048   56499 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 11:43:56.763186   56499 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0610 11:43:56.768410   56499 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0610 11:43:56.773741   56499 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0610 11:43:56.778905   56499 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0610 11:43:56.784203   56499 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0610 11:43:56.789284   56499 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
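
The run of openssl commands above is minikube's certificate sweep: -checkend 86400 exits 0 only if the certificate stays valid for at least the next 24 hours, and the earlier -hash calls produce the short names used for the /etc/ssl/certs symlinks (b5213941.0 and friends). A standalone illustration with the same flags and paths taken from the log:

	# Exit status 0 => this client cert will not expire within 86400 seconds (24h).
	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h (or unreadable)"

	# The subject hash printed here is what names the /etc/ssl/certs/<hash>.0 symlink.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
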
	I0610 11:43:56.794548   56499 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-685160 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.30.1 ClusterName:kubernetes-upgrade-685160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:43:56.794634   56499 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0610 11:43:56.794671   56499 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 11:43:56.830263   56499 cri.go:89] found id: "be89b9a2fd619b574067e867d301fe836b3b1e341b3ccbf8bcd1d4e321eb8d75"
	I0610 11:43:56.830289   56499 cri.go:89] found id: "453ab28adb5bd4ed491b8761c188fda0d07c0e9c431e705fc1b8d56a3da1a43a"
	I0610 11:43:56.830295   56499 cri.go:89] found id: "1ad5fbe828b6fad7d925ac287dbd514550ee52dbc67d85f8ef0e218bdee35953"
	I0610 11:43:56.830299   56499 cri.go:89] found id: "01dacdebd65b0eca4e557f597ec68eec14daf6add2fdae85f8fa07898049106b"
	I0610 11:43:56.830302   56499 cri.go:89] found id: "b317fe402bd0fda08e4972b16dffe4d26a4674e6a14a8b1dbbcf3d719dabdf54"
	I0610 11:43:56.830305   56499 cri.go:89] found id: "41fc9f941fd333fd30ae2391894770405032413ca1cf05cc39fd49f2474e016b"
	I0610 11:43:56.830307   56499 cri.go:89] found id: "2e46375440b99062f69922e4b1704044f2edddafca7ec7de8b3e1870c9a3dc0f"
	I0610 11:43:56.830309   56499 cri.go:89] found id: ""
	I0610 11:43:56.830368   56499 ssh_runner.go:195] Run: sudo runc list -f json
	I0610 11:43:56.859573   56499 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"01dacdebd65b0eca4e557f597ec68eec14daf6add2fdae85f8fa07898049106b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/01dacdebd65b0eca4e557f597ec68eec14daf6add2fdae85f8fa07898049106b/userdata","rootfs":"/var/lib/containers/storage/overlay/9d965a13b429e555d3a265bb2ceedc5ca6c08b535b24e575bc72968090d0fbe8/merged","created":"2024-06-10T11:42:09.88490473Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ac6c6b5e","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ac6c6b5e\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.te
rminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"01dacdebd65b0eca4e557f597ec68eec14daf6add2fdae85f8fa07898049106b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-06-10T11:42:09.768096921Z","io.kubernetes.cri-o.Image":"25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.30.1","io.kubernetes.cri-o.ImageRef":"25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-kubernetes-upgrade-685160\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"aafbd4ab61f8e53adaa6142da976f4ea\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-685160_aafbd4ab61f8e53adaa6142da976f4ea/kube-controller-manager
/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9d965a13b429e555d3a265bb2ceedc5ca6c08b535b24e575bc72968090d0fbe8/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-685160_kube-system_aafbd4ab61f8e53adaa6142da976f4ea_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/3a1f0bcd909db0fd8c7257c4cab7e4a383e06bee6639eb619dcffce603d3c9b3/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"3a1f0bcd909db0fd8c7257c4cab7e4a383e06bee6639eb619dcffce603d3c9b3","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-kubernetes-upgrade-685160_kube-system_aafbd4ab61f8e53adaa6142da976f4ea_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":
"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/aafbd4ab61f8e53adaa6142da976f4ea/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/aafbd4ab61f8e53adaa6142da976f4ea/containers/kube-controller-manager/a03896c7\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux
_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-kubernetes-upgrade-685160","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"aafbd4ab61f8e53adaa6142da976f4ea","kubernetes.io/config.hash":"aafbd4ab61f8e53adaa6142da976f4ea","kubernetes.io/config.seen":"2024-06-10T11:42:06.051381835Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"163b45040b287f35d25420302ebd436b3d8600777faa50a7416710591da652ba","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/163b45040b287f35d25420302ebd436b3d8600777faa50a7416710591da652ba/userdata","rootfs":"/var/lib/containers/storage/overlay/f750f0ce2c7ad65d9badaa00fec2b39e1cb9da1c0eb082115d9dcdc7c662af73/merged","created":"2024-
06-10T11:42:24.355650869Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.hash\":\"8b5e1485b94ff9518ef578d91c769ba1\",\"kubernetes.io/config.seen\":\"2024-06-10T11:42:06.051383154Z\",\"kubernetes.io/config.source\":\"file\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod8b5e1485b94ff9518ef578d91c769ba1","io.kubernetes.cri-o.ContainerID":"163b45040b287f35d25420302ebd436b3d8600777faa50a7416710591da652ba","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-kubernetes-upgrade-685160_kube-system_8b5e1485b94ff9518ef578d91c769ba1_1","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-06-10T11:42:24.205704515Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-685160","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/163b45040b287f35d25420302ebd436b3d8600777faa50a7416710
591da652ba/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.KubeName":"kube-scheduler-kubernetes-upgrade-685160","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"8b5e1485b94ff9518ef578d91c769ba1\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-kubernetes-upgrade-685160\",\"tier\":\"control-plane\",\"component\":\"kube-scheduler\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-685160_8b5e1485b94ff9518ef578d91c769ba1/163b45040b287f35d25420302ebd436b3d8600777faa50a7416710591da652ba.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-kubernetes-upgrade-685160\",\"uid\":\"8b5e1485b94ff9518ef578d91c769ba1\",\"namespace\":\"kube-system\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f750f0ce2c7ad65d9badaa00fec2b39e1cb9da
1c0eb082115d9dcdc7c662af73/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-kubernetes-upgrade-685160_kube-system_8b5e1485b94ff9518ef578d91c769ba1_1","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/163b45040b287f35d25420302ebd436b3d8600777faa50a7416710591da652ba/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"163b45040b287f35d25420302ebd436b3d8600777faa50a7416710591da652ba","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-kubernetes-upgrade-685160_kube-system_8b5e1485b94ff9518ef578d91c769ba1_1","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kuber
netes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/163b45040b287f35d25420302ebd436b3d8600777faa50a7416710591da652ba/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-kubernetes-upgrade-685160","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"8b5e1485b94ff9518ef578d91c769ba1","kubernetes.io/config.hash":"8b5e1485b94ff9518ef578d91c769ba1","kubernetes.io/config.seen":"2024-06-10T11:42:06.051383154Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1ad5fbe828b6fad7d925ac287dbd514550ee52dbc67d85f8ef0e218bdee35953","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/1ad5fbe828b6fad7d925ac287dbd514550ee52dbc67d85f8ef0e218bdee35953/userdata","rootfs":"/var/lib/containers/storage/overlay/db4ab350c5c3fa6f710d8dbb7af8199fa4dc08bc245b3626a432ccafeb90be5c/merged","created":"2024-06-10T11:42:24.559513293Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"a82063cc","
io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"a82063cc\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1ad5fbe828b6fad7d925ac287dbd514550ee52dbc67d85f8ef0e218bdee35953","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-06-10T11:42:24.440814249Z","io.kubernetes.cri-o.Image":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.12-0","io.kubernetes.cri-o.ImageRef":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","io.kubernetes.cri-o.Lab
els":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-kubernetes-upgrade-685160\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"15f78c5c54990e96ad18b39482c096da\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-685160_15f78c5c54990e96ad18b39482c096da/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/db4ab350c5c3fa6f710d8dbb7af8199fa4dc08bc245b3626a432ccafeb90be5c/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-kubernetes-upgrade-685160_kube-system_15f78c5c54990e96ad18b39482c096da_1","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/cf78d60b5e373f6d30ea3d1a7eefe38f3ef22ef175840c20b8e6bb93f4a65dbc/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"cf78d60b5e373f6d30ea3d1a7eefe38f3ef22ef175840c20b8e6bb93f4a65dbc","io.kubernetes.cri-o.SandboxN
ame":"k8s_etcd-kubernetes-upgrade-685160_kube-system_15f78c5c54990e96ad18b39482c096da_1","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/15f78c5c54990e96ad18b39482c096da/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/15f78c5c54990e96ad18b39482c096da/containers/etcd/73caa43d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"et
cd-kubernetes-upgrade-685160","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"15f78c5c54990e96ad18b39482c096da","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.47:2379","kubernetes.io/config.hash":"15f78c5c54990e96ad18b39482c096da","kubernetes.io/config.seen":"2024-06-10T11:42:06.113868320Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2e46375440b99062f69922e4b1704044f2edddafca7ec7de8b3e1870c9a3dc0f","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/2e46375440b99062f69922e4b1704044f2edddafca7ec7de8b3e1870c9a3dc0f/userdata","rootfs":"/var/lib/containers/storage/overlay/0edce50174f854c388be172b10303429965d42166650accd2ccb3d1ee2b51ac8/merged","created":"2024-06-10T11:42:09.731144741Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"1d26206c","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCo
unt":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1d26206c\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2e46375440b99062f69922e4b1704044f2edddafca7ec7de8b3e1870c9a3dc0f","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-06-10T11:42:09.676447356Z","io.kubernetes.cri-o.Image":"91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.30.1","io.kubernetes.cri-o.ImageRef":"91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"i
o.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-685160\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"0b179713a28fd80c0cc32c3b0caf57c6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-685160_0b179713a28fd80c0cc32c3b0caf57c6/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0edce50174f854c388be172b10303429965d42166650accd2ccb3d1ee2b51ac8/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-685160_kube-system_0b179713a28fd80c0cc32c3b0caf57c6_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/7118d9ee2751e716ad055f170ba9eda58ebba855164db6c45196445544433c91/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7118d9ee2751e716ad055f170ba9eda58ebba855164db6c45196445544433c91","io.kubernetes.cri-o.SandboxName":"
k8s_kube-apiserver-kubernetes-upgrade-685160_kube-system_0b179713a28fd80c0cc32c3b0caf57c6_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/0b179713a28fd80c0cc32c3b0caf57c6/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/0b179713a28fd80c0cc32c3b0caf57c6/containers/kube-apiserver/ef6f40ef\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikub
e/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-kubernetes-upgrade-685160","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"0b179713a28fd80c0cc32c3b0caf57c6","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.50.47:8443","kubernetes.io/config.hash":"0b179713a28fd80c0cc32c3b0caf57c6","kubernetes.io/config.seen":"2024-06-10T11:42:06.051377434Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3a1f0bcd909db0fd8c7257c4cab7e4a383e06bee6639eb619dcffce603d3c9b3","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/3a1f0bcd909db0fd8c7257c4cab7e4a383e06bee6639eb619dcffce603d3c9b3/userdata","rootfs":"/var/lib/containers/storage/overlay/9d113b3960d29180f0e7ac1134f0216eee379d47895939c4b535693269281248/merged","created":"2024-06-10T11:42:09.656135543Z","annotat
ions":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-06-10T11:42:06.051381835Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"aafbd4ab61f8e53adaa6142da976f4ea\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/podaafbd4ab61f8e53adaa6142da976f4ea","io.kubernetes.cri-o.ContainerID":"3a1f0bcd909db0fd8c7257c4cab7e4a383e06bee6639eb619dcffce603d3c9b3","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-kubernetes-upgrade-685160_kube-system_aafbd4ab61f8e53adaa6142da976f4ea_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-06-10T11:42:09.550014013Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-685160","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/3a1f0bcd909db0fd8c7257c4cab7e4a383e06bee6639eb619dcffce603d3c9b3/userda
ta/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.KubeName":"kube-controller-manager-kubernetes-upgrade-685160","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.name\":\"kube-controller-manager-kubernetes-upgrade-685160\",\"tier\":\"control-plane\",\"component\":\"kube-controller-manager\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"aafbd4ab61f8e53adaa6142da976f4ea\",\"io.kubernetes.pod.namespace\":\"kube-system\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-685160_aafbd4ab61f8e53adaa6142da976f4ea/3a1f0bcd909db0fd8c7257c4cab7e4a383e06bee6639eb619dcffce603d3c9b3.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-kubernetes-upgrade-685160\",\"uid\":\"aafbd4ab61f8e53adaa6142da976f4ea\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9d113b3960d29180f0e7ac11
34f0216eee379d47895939c4b535693269281248/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-kubernetes-upgrade-685160_kube-system_aafbd4ab61f8e53adaa6142da976f4ea_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":204,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/3a1f0bcd909db0fd8c7257c4cab7e4a383e06bee6639eb619dcffce603d3c9b3/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"3a1f0bcd909db0fd8c7257c4cab7e4a383e06bee6639eb619dcffce603d3c9b3","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-kubernetes-upgrade-685160_kube-system_aafbd4ab61f8e53adaa6142da976f4ea_0","io.kubernetes.cri-o.SeccompPro
filePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/3a1f0bcd909db0fd8c7257c4cab7e4a383e06bee6639eb619dcffce603d3c9b3/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-kubernetes-upgrade-685160","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"aafbd4ab61f8e53adaa6142da976f4ea","kubernetes.io/config.hash":"aafbd4ab61f8e53adaa6142da976f4ea","kubernetes.io/config.seen":"2024-06-10T11:42:06.051381835Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"41fc9f941fd333fd30ae2391894770405032413ca1cf05cc39fd49f2474e016b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/41fc9f941fd333fd30ae2391894770405032413ca1cf05cc39fd49f2474e016b/userdata","rootfs":"/var/lib/containers/storage/overlay/b8dccc69fe1070e031d93866bc852985a1baf23cbce347e54a19a88db34aca92/merged","created":"2024-06-10T11:42:09.891107115Z","annotations":{"io.container.manager":"cri-o","i
o.kubernetes.container.hash":"200064a4","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"200064a4\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"41fc9f941fd333fd30ae2391894770405032413ca1cf05cc39fd49f2474e016b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-06-10T11:42:09.735019818Z","io.kubernetes.cri-o.Image":"a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.30.1","io.kubernetes.cri-o.ImageRef":"a52dc94f0a91256bde86a1c3027a16
336bb8fea9304f9311987066307996f035","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-kubernetes-upgrade-685160\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"8b5e1485b94ff9518ef578d91c769ba1\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-685160_8b5e1485b94ff9518ef578d91c769ba1/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b8dccc69fe1070e031d93866bc852985a1baf23cbce347e54a19a88db34aca92/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-kubernetes-upgrade-685160_kube-system_8b5e1485b94ff9518ef578d91c769ba1_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/cb3e21e785086e64b3348e781b2046c68cf8361814a570ea6bcf3ecf576cf921/userdata/resolv.conf","io.kubernet
es.cri-o.SandboxID":"cb3e21e785086e64b3348e781b2046c68cf8361814a570ea6bcf3ecf576cf921","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-kubernetes-upgrade-685160_kube-system_8b5e1485b94ff9518ef578d91c769ba1_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8b5e1485b94ff9518ef578d91c769ba1/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8b5e1485b94ff9518ef578d91c769ba1/containers/kube-scheduler/d767db8b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-
kubernetes-upgrade-685160","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"8b5e1485b94ff9518ef578d91c769ba1","kubernetes.io/config.hash":"8b5e1485b94ff9518ef578d91c769ba1","kubernetes.io/config.seen":"2024-06-10T11:42:06.051383154Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"453ab28adb5bd4ed491b8761c188fda0d07c0e9c431e705fc1b8d56a3da1a43a","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/453ab28adb5bd4ed491b8761c188fda0d07c0e9c431e705fc1b8d56a3da1a43a/userdata","rootfs":"/var/lib/containers/storage/overlay/f4626685b58f0bc37fd4c98cfc0e1b18e1052535d8ee9e040e5ee553fd6f0ca0/merged","created":"2024-06-10T11:42:24.524832353Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"1d26206c","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","i
o.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"1d26206c\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"453ab28adb5bd4ed491b8761c188fda0d07c0e9c431e705fc1b8d56a3da1a43a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-06-10T11:42:24.45614085Z","io.kubernetes.cri-o.Image":"91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.30.1","io.kubernetes.cri-o.ImageRef":"91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-685160\",\"io.kubernetes
.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"0b179713a28fd80c0cc32c3b0caf57c6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-685160_0b179713a28fd80c0cc32c3b0caf57c6/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f4626685b58f0bc37fd4c98cfc0e1b18e1052535d8ee9e040e5ee553fd6f0ca0/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-kubernetes-upgrade-685160_kube-system_0b179713a28fd80c0cc32c3b0caf57c6_1","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/55d7371542758571d161d01b65816d29b3031151e326df95756dd4c2b580bd60/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"55d7371542758571d161d01b65816d29b3031151e326df95756dd4c2b580bd60","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-kubernetes-upgrade-685160_kube-system_0b179713a28fd
80c0cc32c3b0caf57c6_1","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/0b179713a28fd80c0cc32c3b0caf57c6/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/0b179713a28fd80c0cc32c3b0caf57c6/containers/kube-apiserver/ba905eba\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,
\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-kubernetes-upgrade-685160","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"0b179713a28fd80c0cc32c3b0caf57c6","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.50.47:8443","kubernetes.io/config.hash":"0b179713a28fd80c0cc32c3b0caf57c6","kubernetes.io/config.seen":"2024-06-10T11:42:06.051377434Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"55d7371542758571d161d01b65816d29b3031151e326df95756dd4c2b580bd60","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/55d7371542758571d161d01b65816d29b3031151e326df95756dd4c2b580bd60/userdata","rootfs":"/var/lib/containers/storage/overlay/659f22d34694b09a324b3423361cc079f9e7af7b1cec322261851c577e9fb93f/merged","created":"2024-06-10T11:42:24.331049863Z","annotations":{"component":"kube-apiserver","io.container.manager":"cri-o","io
.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"0b179713a28fd80c0cc32c3b0caf57c6\",\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.50.47:8443\",\"kubernetes.io/config.seen\":\"2024-06-10T11:42:06.051377434Z\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod0b179713a28fd80c0cc32c3b0caf57c6","io.kubernetes.cri-o.ContainerID":"55d7371542758571d161d01b65816d29b3031151e326df95756dd4c2b580bd60","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-kubernetes-upgrade-685160_kube-system_0b179713a28fd80c0cc32c3b0caf57c6_1","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-06-10T11:42:24.212464956Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-685160","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/55d7371542758571d161d01b65816d29b3031151e326df95756dd4c2b580bd60/use
rdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.KubeName":"kube-apiserver-kubernetes-upgrade-685160","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-685160\",\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.pod.uid\":\"0b179713a28fd80c0cc32c3b0caf57c6\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-685160_0b179713a28fd80c0cc32c3b0caf57c6/55d7371542758571d161d01b65816d29b3031151e326df95756dd4c2b580bd60.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-kubernetes-upgrade-685160\",\"uid\":\"0b179713a28fd80c0cc32c3b0caf57c6\",\"namespace\":\"kube-system\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/659f22d34694b09a324b3423361cc079f9e7af7b1cec32226185
1c577e9fb93f/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-kubernetes-upgrade-685160_kube-system_0b179713a28fd80c0cc32c3b0caf57c6_1","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":256,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/55d7371542758571d161d01b65816d29b3031151e326df95756dd4c2b580bd60/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"55d7371542758571d161d01b65816d29b3031151e326df95756dd4c2b580bd60","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-kubernetes-upgrade-685160_kube-system_0b179713a28fd80c0cc32c3b0caf57c6_1","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.Sh
mPath":"/var/run/containers/storage/overlay-containers/55d7371542758571d161d01b65816d29b3031151e326df95756dd4c2b580bd60/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-kubernetes-upgrade-685160","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"0b179713a28fd80c0cc32c3b0caf57c6","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.50.47:8443","kubernetes.io/config.hash":"0b179713a28fd80c0cc32c3b0caf57c6","kubernetes.io/config.seen":"2024-06-10T11:42:06.051377434Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7118d9ee2751e716ad055f170ba9eda58ebba855164db6c45196445544433c91","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/7118d9ee2751e716ad055f170ba9eda58ebba855164db6c45196445544433c91/userdata","rootfs":"/var/lib/containers/storage/overlay/4b28cbed5065edc9eb5aebc192ae7dd80c542944e17c5339daeec4f29df337ec/merged","created":"2024-06-10T11:42:09.600883267Z","annotations":{"
component":"kube-apiserver","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\":\"192.168.50.47:8443\",\"kubernetes.io/config.seen\":\"2024-06-10T11:42:06.051377434Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"0b179713a28fd80c0cc32c3b0caf57c6\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod0b179713a28fd80c0cc32c3b0caf57c6","io.kubernetes.cri-o.ContainerID":"7118d9ee2751e716ad055f170ba9eda58ebba855164db6c45196445544433c91","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-apiserver-kubernetes-upgrade-685160_kube-system_0b179713a28fd80c0cc32c3b0caf57c6_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-06-10T11:42:09.530531399Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-685160","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/7118d9
ee2751e716ad055f170ba9eda58ebba855164db6c45196445544433c91/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.KubeName":"kube-apiserver-kubernetes-upgrade-685160","io.kubernetes.cri-o.Labels":"{\"tier\":\"control-plane\",\"component\":\"kube-apiserver\",\"io.kubernetes.pod.uid\":\"0b179713a28fd80c0cc32c3b0caf57c6\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-apiserver-kubernetes-upgrade-685160\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-kubernetes-upgrade-685160_0b179713a28fd80c0cc32c3b0caf57c6/7118d9ee2751e716ad055f170ba9eda58ebba855164db6c45196445544433c91.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver-kubernetes-upgrade-685160\",\"uid\":\"0b179713a28fd80c0cc32c3b0caf57c6\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4b28
cbed5065edc9eb5aebc192ae7dd80c542944e17c5339daeec4f29df337ec/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver-kubernetes-upgrade-685160_kube-system_0b179713a28fd80c0cc32c3b0caf57c6_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":256,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/7118d9ee2751e716ad055f170ba9eda58ebba855164db6c45196445544433c91/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"7118d9ee2751e716ad055f170ba9eda58ebba855164db6c45196445544433c91","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-kubernetes-upgrade-685160_kube-system_0b179713a28fd80c0cc32c3b0caf57c6_0","io.kubernetes.cri-o.SeccompP
rofilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/7118d9ee2751e716ad055f170ba9eda58ebba855164db6c45196445544433c91/userdata/shm","io.kubernetes.pod.name":"kube-apiserver-kubernetes-upgrade-685160","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"0b179713a28fd80c0cc32c3b0caf57c6","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.50.47:8443","kubernetes.io/config.hash":"0b179713a28fd80c0cc32c3b0caf57c6","kubernetes.io/config.seen":"2024-06-10T11:42:06.051377434Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a2c616e4300d658fe25f90a03741d1fb3a95718df1741e881c8021ab16489b56","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/a2c616e4300d658fe25f90a03741d1fb3a95718df1741e881c8021ab16489b56/userdata","rootfs":"/var/lib/containers/storage/overlay/e819496cdfef5abbf59defd4881844b108daa997f2bc3b52d19e4b63e1965264/merged","created":"
2024-06-10T11:42:09.61291193Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"15f78c5c54990e96ad18b39482c096da\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.50.47:2379\",\"kubernetes.io/config.seen\":\"2024-06-10T11:42:06.113868320Z\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod15f78c5c54990e96ad18b39482c096da","io.kubernetes.cri-o.ContainerID":"a2c616e4300d658fe25f90a03741d1fb3a95718df1741e881c8021ab16489b56","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-kubernetes-upgrade-685160_kube-system_15f78c5c54990e96ad18b39482c096da_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-06-10T11:42:09.533089305Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-685160","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overl
ay-containers/a2c616e4300d658fe25f90a03741d1fb3a95718df1741e881c8021ab16489b56/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.KubeName":"etcd-kubernetes-upgrade-685160","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-kubernetes-upgrade-685160\",\"tier\":\"control-plane\",\"component\":\"etcd\",\"io.kubernetes.container.name\":\"POD\",\"io.kubernetes.pod.uid\":\"15f78c5c54990e96ad18b39482c096da\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-685160_15f78c5c54990e96ad18b39482c096da/a2c616e4300d658fe25f90a03741d1fb3a95718df1741e881c8021ab16489b56.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-kubernetes-upgrade-685160\",\"uid\":\"15f78c5c54990e96ad18b39482c096da\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e819496cdfef5abbf59defd4881844b108
daa997f2bc3b52d19e4b63e1965264/merged","io.kubernetes.cri-o.Name":"k8s_etcd-kubernetes-upgrade-685160_kube-system_15f78c5c54990e96ad18b39482c096da_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/a2c616e4300d658fe25f90a03741d1fb3a95718df1741e881c8021ab16489b56/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"a2c616e4300d658fe25f90a03741d1fb3a95718df1741e881c8021ab16489b56","io.kubernetes.cri-o.SandboxName":"k8s_etcd-kubernetes-upgrade-685160_kube-system_15f78c5c54990e96ad18b39482c096da_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmP
ath":"/var/run/containers/storage/overlay-containers/a2c616e4300d658fe25f90a03741d1fb3a95718df1741e881c8021ab16489b56/userdata/shm","io.kubernetes.pod.name":"etcd-kubernetes-upgrade-685160","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"15f78c5c54990e96ad18b39482c096da","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.47:2379","kubernetes.io/config.hash":"15f78c5c54990e96ad18b39482c096da","kubernetes.io/config.seen":"2024-06-10T11:42:06.113868320Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b317fe402bd0fda08e4972b16dffe4d26a4674e6a14a8b1dbbcf3d719dabdf54","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/b317fe402bd0fda08e4972b16dffe4d26a4674e6a14a8b1dbbcf3d719dabdf54/userdata","rootfs":"/var/lib/containers/storage/overlay/361fdcb0118bb27f7ace8332550f653b368c2efcf3ebdb5948d9b0bcde1803ed/merged","created":"2024-06-10T11:42:09.837342998Z","annotations":{"io.container.manage
r":"cri-o","io.kubernetes.container.hash":"a82063cc","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"a82063cc\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b317fe402bd0fda08e4972b16dffe4d26a4674e6a14a8b1dbbcf3d719dabdf54","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-06-10T11:42:09.757385681Z","io.kubernetes.cri-o.Image":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.12-0","io.kubernetes.cri-o.ImageRef":"3861cfcd7c04ccac1f062788eca394872485
27ef0c0cfd477a83d7691a75a899","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-kubernetes-upgrade-685160\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"15f78c5c54990e96ad18b39482c096da\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-685160_15f78c5c54990e96ad18b39482c096da/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/361fdcb0118bb27f7ace8332550f653b368c2efcf3ebdb5948d9b0bcde1803ed/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-kubernetes-upgrade-685160_kube-system_15f78c5c54990e96ad18b39482c096da_0","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/a2c616e4300d658fe25f90a03741d1fb3a95718df1741e881c8021ab16489b56/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a2c616e4300d658fe25f90a03741d1fb3a95718df1741e881c8021a
b16489b56","io.kubernetes.cri-o.SandboxName":"k8s_etcd-kubernetes-upgrade-685160_kube-system_15f78c5c54990e96ad18b39482c096da_0","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/15f78c5c54990e96ad18b39482c096da/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/15f78c5c54990e96ad18b39482c096da/containers/etcd/ea6b5a94\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel
\":false}]","io.kubernetes.pod.name":"etcd-kubernetes-upgrade-685160","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"15f78c5c54990e96ad18b39482c096da","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.47:2379","kubernetes.io/config.hash":"15f78c5c54990e96ad18b39482c096da","kubernetes.io/config.seen":"2024-06-10T11:42:06.113868320Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"be89b9a2fd619b574067e867d301fe836b3b1e341b3ccbf8bcd1d4e321eb8d75","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/be89b9a2fd619b574067e867d301fe836b3b1e341b3ccbf8bcd1d4e321eb8d75/userdata","rootfs":"/var/lib/containers/storage/overlay/efaa729e3646fab522ed1a860e2d8e02f620cd454012c1a3d58776bc141a5341/merged","created":"2024-06-10T11:42:24.58957367Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"200064a4","io.kubernetes.container.name":"kube-schedu
ler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"200064a4\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"be89b9a2fd619b574067e867d301fe836b3b1e341b3ccbf8bcd1d4e321eb8d75","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2024-06-10T11:42:24.497005217Z","io.kubernetes.cri-o.Image":"a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.30.1","io.kubernetes.cri-o.ImageRef":"a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.
container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-kubernetes-upgrade-685160\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"8b5e1485b94ff9518ef578d91c769ba1\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-685160_8b5e1485b94ff9518ef578d91c769ba1/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/efaa729e3646fab522ed1a860e2d8e02f620cd454012c1a3d58776bc141a5341/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-kubernetes-upgrade-685160_kube-system_8b5e1485b94ff9518ef578d91c769ba1_1","io.kubernetes.cri-o.PlatformRuntimePath":"","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/163b45040b287f35d25420302ebd436b3d8600777faa50a7416710591da652ba/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"163b45040b287f35d25420302ebd436b3d8600777faa50a7
416710591da652ba","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-kubernetes-upgrade-685160_kube-system_8b5e1485b94ff9518ef578d91c769ba1_1","io.kubernetes.cri-o.SeccompProfilePath":"RuntimeDefault","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8b5e1485b94ff9518ef578d91c769ba1/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8b5e1485b94ff9518ef578d91c769ba1/containers/kube-scheduler/aa22bf63\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-kubernetes-upgrade-685160","io.kubernetes.pod.namespace":"kube-system
","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"8b5e1485b94ff9518ef578d91c769ba1","kubernetes.io/config.hash":"8b5e1485b94ff9518ef578d91c769ba1","kubernetes.io/config.seen":"2024-06-10T11:42:06.051383154Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cb3e21e785086e64b3348e781b2046c68cf8361814a570ea6bcf3ecf576cf921","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/cb3e21e785086e64b3348e781b2046c68cf8361814a570ea6bcf3ecf576cf921/userdata","rootfs":"/var/lib/containers/storage/overlay/178ede71cd8fe09c2158607e5aeabf965661553c43745707220daa3e8dedd963/merged","created":"2024-06-10T11:42:09.638074847Z","annotations":{"component":"kube-scheduler","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"8b5e1485b94ff9518ef578d91c769ba1\",\"kubernetes.io/config.seen\":\"2024-06-10T11:42:06.051383154Z\"}"
,"io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/pod8b5e1485b94ff9518ef578d91c769ba1","io.kubernetes.cri-o.ContainerID":"cb3e21e785086e64b3348e781b2046c68cf8361814a570ea6bcf3ecf576cf921","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-scheduler-kubernetes-upgrade-685160_kube-system_8b5e1485b94ff9518ef578d91c769ba1_0","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-06-10T11:42:09.532095527Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-685160","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/cb3e21e785086e64b3348e781b2046c68cf8361814a570ea6bcf3ecf576cf921/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.KubeName":"kube-scheduler-kubernetes-upgrade-685160","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.pod.uid\":\"8b5e1485b94ff9518ef578d91c769ba1\",\"io.kubernetes.pod.namespace\":\"k
ube-system\",\"io.kubernetes.pod.name\":\"kube-scheduler-kubernetes-upgrade-685160\",\"io.kubernetes.container.name\":\"POD\",\"tier\":\"control-plane\",\"component\":\"kube-scheduler\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-kubernetes-upgrade-685160_8b5e1485b94ff9518ef578d91c769ba1/cb3e21e785086e64b3348e781b2046c68cf8361814a570ea6bcf3ecf576cf921.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler-kubernetes-upgrade-685160\",\"uid\":\"8b5e1485b94ff9518ef578d91c769ba1\",\"namespace\":\"kube-system\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/178ede71cd8fe09c2158607e5aeabf965661553c43745707220daa3e8dedd963/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler-kubernetes-upgrade-685160_kube-system_8b5e1485b94ff9518ef578d91c769ba1_0","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_pe
riod\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/cb3e21e785086e64b3348e781b2046c68cf8361814a570ea6bcf3ecf576cf921/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"cb3e21e785086e64b3348e781b2046c68cf8361814a570ea6bcf3ecf576cf921","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-kubernetes-upgrade-685160_kube-system_8b5e1485b94ff9518ef578d91c769ba1_0","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/cb3e21e785086e64b3348e781b2046c68cf8361814a570ea6bcf3ecf576cf921/userdata/shm","io.kubernetes.pod.name":"kube-scheduler-kubernetes-upgrade-685160","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"8b5e1485b94ff9518ef578d91c769ba1","kubernetes.io/config.hash":"8b5e1485b94ff95
18ef578d91c769ba1","kubernetes.io/config.seen":"2024-06-10T11:42:06.051383154Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cf78d60b5e373f6d30ea3d1a7eefe38f3ef22ef175840c20b8e6bb93f4a65dbc","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/cf78d60b5e373f6d30ea3d1a7eefe38f3ef22ef175840c20b8e6bb93f4a65dbc/userdata","rootfs":"/var/lib/containers/storage/overlay/5a7ccf0e14c573125c82ba54ceb0916ad374d92bdf39b2ef77c7b817763acc43/merged","created":"2024-06-10T11:42:24.370867419Z","annotations":{"component":"etcd","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-06-10T11:42:06.113868320Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"15f78c5c54990e96ad18b39482c096da\",\"kubeadm.kubernetes.io/etcd.advertise-client-urls\":\"https://192.168.50.47:2379\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable
/pod15f78c5c54990e96ad18b39482c096da","io.kubernetes.cri-o.ContainerID":"cf78d60b5e373f6d30ea3d1a7eefe38f3ef22ef175840c20b8e6bb93f4a65dbc","io.kubernetes.cri-o.ContainerName":"k8s_POD_etcd-kubernetes-upgrade-685160_kube-system_15f78c5c54990e96ad18b39482c096da_1","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-06-10T11:42:24.2504387Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-685160","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/cf78d60b5e373f6d30ea3d1a7eefe38f3ef22ef175840c20b8e6bb93f4a65dbc/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.KubeName":"etcd-kubernetes-upgrade-685160","io.kubernetes.cri-o.Labels":"{\"component\":\"etcd\",\"io.kubernetes.pod.uid\":\"15f78c5c54990e96ad18b39482c096da\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"etcd-kubernet
es-upgrade-685160\",\"tier\":\"control-plane\",\"io.kubernetes.container.name\":\"POD\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-kubernetes-upgrade-685160_15f78c5c54990e96ad18b39482c096da/cf78d60b5e373f6d30ea3d1a7eefe38f3ef22ef175840c20b8e6bb93f4a65dbc.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd-kubernetes-upgrade-685160\",\"uid\":\"15f78c5c54990e96ad18b39482c096da\",\"namespace\":\"kube-system\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/5a7ccf0e14c573125c82ba54ceb0916ad374d92bdf39b2ef77c7b817763acc43/merged","io.kubernetes.cri-o.Name":"k8s_etcd-kubernetes-upgrade-685160_kube-system_15f78c5c54990e96ad18b39482c096da_1","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":102,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.PortMappings":"[]
","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/cf78d60b5e373f6d30ea3d1a7eefe38f3ef22ef175840c20b8e6bb93f4a65dbc/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"cf78d60b5e373f6d30ea3d1a7eefe38f3ef22ef175840c20b8e6bb93f4a65dbc","io.kubernetes.cri-o.SandboxName":"k8s_etcd-kubernetes-upgrade-685160_kube-system_15f78c5c54990e96ad18b39482c096da_1","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/cf78d60b5e373f6d30ea3d1a7eefe38f3ef22ef175840c20b8e6bb93f4a65dbc/userdata/shm","io.kubernetes.pod.name":"etcd-kubernetes-upgrade-685160","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"15f78c5c54990e96ad18b39482c096da","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.50.47:2379","kubernetes.io/config.hash":"15f78c5c54990e96ad18b39482c096da","kubernetes.io/config.seen":"2024-0
6-10T11:42:06.113868320Z","kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d6b8c8ceefaeb45197b924586c45e109b473e2e7469fe3cc9468cff395c0c6e9","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/d6b8c8ceefaeb45197b924586c45e109b473e2e7469fe3cc9468cff395c0c6e9/userdata","rootfs":"/var/lib/containers/storage/overlay/3ee893a184a4e782486ea6ff015f03650907569ec863e94984141ba2c585c2dc/merged","created":"2024-06-10T11:42:24.29815876Z","annotations":{"component":"kube-controller-manager","io.container.manager":"cri-o","io.kubernetes.container.name":"POD","io.kubernetes.cri-o.Annotations":"{\"kubernetes.io/config.seen\":\"2024-06-10T11:42:06.051381835Z\",\"kubernetes.io/config.source\":\"file\",\"kubernetes.io/config.hash\":\"aafbd4ab61f8e53adaa6142da976f4ea\"}","io.kubernetes.cri-o.CgroupParent":"/kubepods/burstable/podaafbd4ab61f8e53adaa6142da976f4ea","io.kubernetes.cri-o.ContainerID":"d6b8c8ceefaeb45197b924586c45e109b473e2e7469fe3c
c9468cff395c0c6e9","io.kubernetes.cri-o.ContainerName":"k8s_POD_kube-controller-manager-kubernetes-upgrade-685160_kube-system_aafbd4ab61f8e53adaa6142da976f4ea_1","io.kubernetes.cri-o.ContainerType":"sandbox","io.kubernetes.cri-o.Created":"2024-06-10T11:42:24.234725785Z","io.kubernetes.cri-o.HostName":"kubernetes-upgrade-685160","io.kubernetes.cri-o.HostNetwork":"true","io.kubernetes.cri-o.HostnamePath":"/var/run/containers/storage/overlay-containers/d6b8c8ceefaeb45197b924586c45e109b473e2e7469fe3cc9468cff395c0c6e9/userdata/hostname","io.kubernetes.cri-o.Image":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.ImageName":"registry.k8s.io/pause:3.9","io.kubernetes.cri-o.KubeName":"kube-controller-manager-kubernetes-upgrade-685160","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"POD\",\"component\":\"kube-controller-manager\",\"io.kubernetes.pod.uid\":\"aafbd4ab61f8e53adaa6142da976f4ea\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.name\":\"kube-controller-manager-kubernete
s-upgrade-685160\",\"tier\":\"control-plane\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-685160_aafbd4ab61f8e53adaa6142da976f4ea/d6b8c8ceefaeb45197b924586c45e109b473e2e7469fe3cc9468cff395c0c6e9.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager-kubernetes-upgrade-685160\",\"uid\":\"aafbd4ab61f8e53adaa6142da976f4ea\",\"namespace\":\"kube-system\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/3ee893a184a4e782486ea6ff015f03650907569ec863e94984141ba2c585c2dc/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager-kubernetes-upgrade-685160_kube-system_aafbd4ab61f8e53adaa6142da976f4ea_1","io.kubernetes.cri-o.Namespace":"kube-system","io.kubernetes.cri-o.NamespaceOptions":"{\"network\":2,\"pid\":1}","io.kubernetes.cri-o.PodLinuxOverhead":"{}","io.kubernetes.cri-o.PodLinuxResources":"{\"cpu_period\":100000,\"cpu_shares\":204,\"unified\":{\"memory.oom.group\":\"1\"}}","io.kubernetes.cri-o.Po
rtMappings":"[]","io.kubernetes.cri-o.PrivilegedRuntime":"true","io.kubernetes.cri-o.ResolvPath":"/var/run/containers/storage/overlay-containers/d6b8c8ceefaeb45197b924586c45e109b473e2e7469fe3cc9468cff395c0c6e9/userdata/resolv.conf","io.kubernetes.cri-o.RuntimeHandler":"","io.kubernetes.cri-o.SandboxID":"d6b8c8ceefaeb45197b924586c45e109b473e2e7469fe3cc9468cff395c0c6e9","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-kubernetes-upgrade-685160_kube-system_aafbd4ab61f8e53adaa6142da976f4ea_1","io.kubernetes.cri-o.SeccompProfilePath":"Unconfined","io.kubernetes.cri-o.ShmPath":"/var/run/containers/storage/overlay-containers/d6b8c8ceefaeb45197b924586c45e109b473e2e7469fe3cc9468cff395c0c6e9/userdata/shm","io.kubernetes.pod.name":"kube-controller-manager-kubernetes-upgrade-685160","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"aafbd4ab61f8e53adaa6142da976f4ea","kubernetes.io/config.hash":"aafbd4ab61f8e53adaa6142da976f4ea","kubernetes.io/config.seen":"2024-06-10T11:42:06.051381835Z","
kubernetes.io/config.source":"file","tier":"control-plane"},"owner":"root"}]
	I0610 11:43:56.860216   56499 cri.go:126] list returned 15 containers
	I0610 11:43:56.860235   56499 cri.go:129] container: {ID:01dacdebd65b0eca4e557f597ec68eec14daf6add2fdae85f8fa07898049106b Status:stopped}
	I0610 11:43:56.860253   56499 cri.go:135] skipping {01dacdebd65b0eca4e557f597ec68eec14daf6add2fdae85f8fa07898049106b stopped}: state = "stopped", want "paused"
	I0610 11:43:56.860259   56499 cri.go:129] container: {ID:163b45040b287f35d25420302ebd436b3d8600777faa50a7416710591da652ba Status:stopped}
	I0610 11:43:56.860265   56499 cri.go:131] skipping 163b45040b287f35d25420302ebd436b3d8600777faa50a7416710591da652ba - not in ps
	I0610 11:43:56.860273   56499 cri.go:129] container: {ID:1ad5fbe828b6fad7d925ac287dbd514550ee52dbc67d85f8ef0e218bdee35953 Status:stopped}
	I0610 11:43:56.860278   56499 cri.go:135] skipping {1ad5fbe828b6fad7d925ac287dbd514550ee52dbc67d85f8ef0e218bdee35953 stopped}: state = "stopped", want "paused"
	I0610 11:43:56.860285   56499 cri.go:129] container: {ID:2e46375440b99062f69922e4b1704044f2edddafca7ec7de8b3e1870c9a3dc0f Status:stopped}
	I0610 11:43:56.860290   56499 cri.go:135] skipping {2e46375440b99062f69922e4b1704044f2edddafca7ec7de8b3e1870c9a3dc0f stopped}: state = "stopped", want "paused"
	I0610 11:43:56.860294   56499 cri.go:129] container: {ID:3a1f0bcd909db0fd8c7257c4cab7e4a383e06bee6639eb619dcffce603d3c9b3 Status:stopped}
	I0610 11:43:56.860299   56499 cri.go:131] skipping 3a1f0bcd909db0fd8c7257c4cab7e4a383e06bee6639eb619dcffce603d3c9b3 - not in ps
	I0610 11:43:56.860302   56499 cri.go:129] container: {ID:41fc9f941fd333fd30ae2391894770405032413ca1cf05cc39fd49f2474e016b Status:stopped}
	I0610 11:43:56.860311   56499 cri.go:135] skipping {41fc9f941fd333fd30ae2391894770405032413ca1cf05cc39fd49f2474e016b stopped}: state = "stopped", want "paused"
	I0610 11:43:56.860314   56499 cri.go:129] container: {ID:453ab28adb5bd4ed491b8761c188fda0d07c0e9c431e705fc1b8d56a3da1a43a Status:stopped}
	I0610 11:43:56.860318   56499 cri.go:135] skipping {453ab28adb5bd4ed491b8761c188fda0d07c0e9c431e705fc1b8d56a3da1a43a stopped}: state = "stopped", want "paused"
	I0610 11:43:56.860324   56499 cri.go:129] container: {ID:55d7371542758571d161d01b65816d29b3031151e326df95756dd4c2b580bd60 Status:stopped}
	I0610 11:43:56.860329   56499 cri.go:131] skipping 55d7371542758571d161d01b65816d29b3031151e326df95756dd4c2b580bd60 - not in ps
	I0610 11:43:56.860339   56499 cri.go:129] container: {ID:7118d9ee2751e716ad055f170ba9eda58ebba855164db6c45196445544433c91 Status:stopped}
	I0610 11:43:56.860344   56499 cri.go:131] skipping 7118d9ee2751e716ad055f170ba9eda58ebba855164db6c45196445544433c91 - not in ps
	I0610 11:43:56.860350   56499 cri.go:129] container: {ID:a2c616e4300d658fe25f90a03741d1fb3a95718df1741e881c8021ab16489b56 Status:stopped}
	I0610 11:43:56.860353   56499 cri.go:131] skipping a2c616e4300d658fe25f90a03741d1fb3a95718df1741e881c8021ab16489b56 - not in ps
	I0610 11:43:56.860357   56499 cri.go:129] container: {ID:b317fe402bd0fda08e4972b16dffe4d26a4674e6a14a8b1dbbcf3d719dabdf54 Status:stopped}
	I0610 11:43:56.860363   56499 cri.go:135] skipping {b317fe402bd0fda08e4972b16dffe4d26a4674e6a14a8b1dbbcf3d719dabdf54 stopped}: state = "stopped", want "paused"
	I0610 11:43:56.860367   56499 cri.go:129] container: {ID:be89b9a2fd619b574067e867d301fe836b3b1e341b3ccbf8bcd1d4e321eb8d75 Status:stopped}
	I0610 11:43:56.860372   56499 cri.go:135] skipping {be89b9a2fd619b574067e867d301fe836b3b1e341b3ccbf8bcd1d4e321eb8d75 stopped}: state = "stopped", want "paused"
	I0610 11:43:56.860379   56499 cri.go:129] container: {ID:cb3e21e785086e64b3348e781b2046c68cf8361814a570ea6bcf3ecf576cf921 Status:stopped}
	I0610 11:43:56.860383   56499 cri.go:131] skipping cb3e21e785086e64b3348e781b2046c68cf8361814a570ea6bcf3ecf576cf921 - not in ps
	I0610 11:43:56.860389   56499 cri.go:129] container: {ID:cf78d60b5e373f6d30ea3d1a7eefe38f3ef22ef175840c20b8e6bb93f4a65dbc Status:stopped}
	I0610 11:43:56.860393   56499 cri.go:131] skipping cf78d60b5e373f6d30ea3d1a7eefe38f3ef22ef175840c20b8e6bb93f4a65dbc - not in ps
	I0610 11:43:56.860399   56499 cri.go:129] container: {ID:d6b8c8ceefaeb45197b924586c45e109b473e2e7469fe3cc9468cff395c0c6e9 Status:stopped}
	I0610 11:43:56.860402   56499 cri.go:131] skipping d6b8c8ceefaeb45197b924586c45e109b473e2e7469fe3cc9468cff395c0c6e9 - not in ps
	I0610 11:43:56.860446   56499 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0610 11:43:56.869902   56499 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0610 11:43:56.869926   56499 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0610 11:43:56.869933   56499 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0610 11:43:56.869983   56499 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0610 11:43:56.878976   56499 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0610 11:43:56.880403   56499 kubeconfig.go:125] found "kubernetes-upgrade-685160" server: "https://192.168.50.47:8443"
	I0610 11:43:56.882676   56499 kapi.go:59] client config for kubernetes-upgrade-685160: &rest.Config{Host:"https://192.168.50.47:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/client.crt", KeyFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/client.key", CAFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), C
AData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfaf80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 11:43:56.883595   56499 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0610 11:43:56.892341   56499 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.47
	I0610 11:43:56.892374   56499 kubeadm.go:1154] stopping kube-system containers ...
	I0610 11:43:56.892388   56499 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0610 11:43:56.892438   56499 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 11:43:56.934687   56499 cri.go:89] found id: "be89b9a2fd619b574067e867d301fe836b3b1e341b3ccbf8bcd1d4e321eb8d75"
	I0610 11:43:56.934713   56499 cri.go:89] found id: "453ab28adb5bd4ed491b8761c188fda0d07c0e9c431e705fc1b8d56a3da1a43a"
	I0610 11:43:56.934717   56499 cri.go:89] found id: "1ad5fbe828b6fad7d925ac287dbd514550ee52dbc67d85f8ef0e218bdee35953"
	I0610 11:43:56.934720   56499 cri.go:89] found id: "01dacdebd65b0eca4e557f597ec68eec14daf6add2fdae85f8fa07898049106b"
	I0610 11:43:56.934723   56499 cri.go:89] found id: "b317fe402bd0fda08e4972b16dffe4d26a4674e6a14a8b1dbbcf3d719dabdf54"
	I0610 11:43:56.934726   56499 cri.go:89] found id: "41fc9f941fd333fd30ae2391894770405032413ca1cf05cc39fd49f2474e016b"
	I0610 11:43:56.934728   56499 cri.go:89] found id: "2e46375440b99062f69922e4b1704044f2edddafca7ec7de8b3e1870c9a3dc0f"
	I0610 11:43:56.934731   56499 cri.go:89] found id: ""
	I0610 11:43:56.934742   56499 cri.go:234] Stopping containers: [be89b9a2fd619b574067e867d301fe836b3b1e341b3ccbf8bcd1d4e321eb8d75 453ab28adb5bd4ed491b8761c188fda0d07c0e9c431e705fc1b8d56a3da1a43a 1ad5fbe828b6fad7d925ac287dbd514550ee52dbc67d85f8ef0e218bdee35953 01dacdebd65b0eca4e557f597ec68eec14daf6add2fdae85f8fa07898049106b b317fe402bd0fda08e4972b16dffe4d26a4674e6a14a8b1dbbcf3d719dabdf54 41fc9f941fd333fd30ae2391894770405032413ca1cf05cc39fd49f2474e016b 2e46375440b99062f69922e4b1704044f2edddafca7ec7de8b3e1870c9a3dc0f]
	I0610 11:43:56.934786   56499 ssh_runner.go:195] Run: which crictl
	I0610 11:43:56.938698   56499 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 be89b9a2fd619b574067e867d301fe836b3b1e341b3ccbf8bcd1d4e321eb8d75 453ab28adb5bd4ed491b8761c188fda0d07c0e9c431e705fc1b8d56a3da1a43a 1ad5fbe828b6fad7d925ac287dbd514550ee52dbc67d85f8ef0e218bdee35953 01dacdebd65b0eca4e557f597ec68eec14daf6add2fdae85f8fa07898049106b b317fe402bd0fda08e4972b16dffe4d26a4674e6a14a8b1dbbcf3d719dabdf54 41fc9f941fd333fd30ae2391894770405032413ca1cf05cc39fd49f2474e016b 2e46375440b99062f69922e4b1704044f2edddafca7ec7de8b3e1870c9a3dc0f
	I0610 11:43:57.016887   56499 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0610 11:43:57.055318   56499 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:43:57.065796   56499 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5647 Jun 10 11:42 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jun 10 11:42 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5759 Jun 10 11:42 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jun 10 11:42 /etc/kubernetes/scheduler.conf
	
	I0610 11:43:57.065860   56499 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 11:43:57.075210   56499 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 11:43:57.084756   56499 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 11:43:57.095980   56499 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0610 11:43:57.096041   56499 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:43:57.106698   56499 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 11:43:57.117172   56499 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0610 11:43:57.117237   56499 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 11:43:57.126357   56499 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 11:43:57.135614   56499 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:43:57.201131   56499 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:43:58.339984   56499 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.138816561s)
	I0610 11:43:58.340027   56499 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:43:58.541098   56499 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:43:58.610508   56499 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
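	(The five kubeadm invocations above regenerate the certificates, kubeconfigs, kubelet bootstrap, static control-plane manifests and the local etcd manifest from the same /var/tmp/minikube/kubeadm.yaml. A minimal Go sketch of that phase sequence follows; it assumes kubeadm is available under the version directory seen in the log and is illustrative only, not minikube's actual ssh_runner code.)

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// runKubeadmPhases mirrors the phase sequence visible in the log above.
// Illustrative sketch only; minikube runs these over SSH inside the VM.
func runKubeadmPhases(binDir, config string) error {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		// Copy before appending so the shared literal is not mutated.
		args := append(append([]string{}, phase...), "--config", config)
		cmd := exec.Command(binDir+"/kubeadm", args...)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("kubeadm %v: %v\n%s", phase, err, out)
		}
	}
	return nil
}

func main() {
	// Paths as they appear in the log; adjust for a real host.
	if err := runKubeadmPhases("/var/lib/minikube/binaries/v1.30.1", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
		log.Fatal(err)
	}
}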
	I0610 11:43:58.691621   56499 api_server.go:52] waiting for apiserver process to appear ...
	I0610 11:43:58.691710   56499 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:43:59.192101   56499 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:43:59.692074   56499 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:43:59.706657   56499 api_server.go:72] duration metric: took 1.015034563s to wait for apiserver process to appear ...
	I0610 11:43:59.706681   56499 api_server.go:88] waiting for apiserver healthz status ...
	I0610 11:43:59.706702   56499 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0610 11:44:02.188446   56499 api_server.go:279] https://192.168.50.47:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0610 11:44:02.188477   56499 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0610 11:44:02.188489   56499 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0610 11:44:02.299182   56499 api_server.go:279] https://192.168.50.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:44:02.299211   56499 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:44:02.299223   56499 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0610 11:44:02.307601   56499 api_server.go:279] https://192.168.50.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:44:02.307631   56499 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:44:02.706817   56499 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0610 11:44:02.711318   56499 api_server.go:279] https://192.168.50.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:44:02.711353   56499 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:44:03.206851   56499 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0610 11:44:03.211465   56499 api_server.go:279] https://192.168.50.47:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:44:03.211489   56499 api_server.go:103] status: https://192.168.50.47:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:44:03.707255   56499 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0610 11:44:03.711441   56499 api_server.go:279] https://192.168.50.47:8443/healthz returned 200:
	ok
	I0610 11:44:03.718264   56499 api_server.go:141] control plane version: v1.30.1
	I0610 11:44:03.718318   56499 api_server.go:131] duration metric: took 4.01162963s to wait for apiserver health ...
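	(The loop above is api_server.go repeatedly probing https://192.168.50.47:8443/healthz, tolerating the initial 403 for the anonymous user and the 500 responses while post-start hooks such as rbac/bootstrap-roles finish, until the endpoint returns 200. A minimal Go sketch of such a poll follows; TLS verification is skipped purely for illustration, whereas minikube itself authenticates with the cluster's client certificates.)

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes, mirroring the retry loop visible in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reports "ok"
			}
			// 403/500 while RBAC bootstrap and post-start hooks complete: keep polling.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.47:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}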
	I0610 11:44:03.718329   56499 cni.go:84] Creating CNI manager for ""
	I0610 11:44:03.718335   56499 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:44:03.720399   56499 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 11:44:03.722165   56499 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 11:44:03.732406   56499 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0610 11:44:03.749987   56499 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 11:44:03.759163   56499 system_pods.go:59] 5 kube-system pods found
	I0610 11:44:03.759198   56499 system_pods.go:61] "etcd-kubernetes-upgrade-685160" [cf27bfeb-288c-47b1-8f3a-b50f988b52ed] Running
	I0610 11:44:03.759213   56499 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-685160" [de77a21e-d762-4216-8b23-810fd33676e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0610 11:44:03.759222   56499 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-685160" [7c7aa32e-b106-4d04-8f57-155b30189836] Running
	I0610 11:44:03.759231   56499 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-685160" [bc011c93-caec-4e8c-9441-1bc90f3abfbb] Running
	I0610 11:44:03.759250   56499 system_pods.go:61] "storage-provisioner" [4061c992-b7d2-4474-b709-94f104f79056] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0610 11:44:03.759261   56499 system_pods.go:74] duration metric: took 9.255521ms to wait for pod list to return data ...
	I0610 11:44:03.759270   56499 node_conditions.go:102] verifying NodePressure condition ...
	I0610 11:44:03.762079   56499 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 11:44:03.762109   56499 node_conditions.go:123] node cpu capacity is 2
	I0610 11:44:03.762128   56499 node_conditions.go:105] duration metric: took 2.852511ms to run NodePressure ...
	I0610 11:44:03.762148   56499 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:44:04.069066   56499 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 11:44:04.079581   56499 ops.go:34] apiserver oom_adj: -16
	I0610 11:44:04.079597   56499 kubeadm.go:591] duration metric: took 7.209658649s to restartPrimaryControlPlane
	I0610 11:44:04.079605   56499 kubeadm.go:393] duration metric: took 7.285063612s to StartCluster
	I0610 11:44:04.079619   56499 settings.go:142] acquiring lock: {Name:mk00410f6b6051b7558c7a348cc8c9f1c35c7547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:44:04.079697   56499 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 11:44:04.081044   56499 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/kubeconfig: {Name:mk6bc087e599296d9e4a696a021944fac20ee98b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:44:04.081305   56499 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.47 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 11:44:04.083101   56499 out.go:177] * Verifying Kubernetes components...
	I0610 11:44:04.081479   56499 config.go:182] Loaded profile config "kubernetes-upgrade-685160": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:44:04.081459   56499 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 11:44:04.084566   56499 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-685160"
	I0610 11:44:04.084598   56499 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-685160"
	I0610 11:44:04.084605   56499 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:44:04.084611   56499 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-685160"
	I0610 11:44:04.084633   56499 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-685160"
	W0610 11:44:04.084606   56499 addons.go:243] addon storage-provisioner should already be in state true
	I0610 11:44:04.084700   56499 host.go:66] Checking if "kubernetes-upgrade-685160" exists ...
	I0610 11:44:04.084975   56499 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:44:04.085006   56499 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:44:04.085073   56499 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:44:04.085092   56499 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:44:04.100354   56499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41685
	I0610 11:44:04.100777   56499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35535
	I0610 11:44:04.100898   56499 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:44:04.101200   56499 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:44:04.101379   56499 main.go:141] libmachine: Using API Version  1
	I0610 11:44:04.101397   56499 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:44:04.101701   56499 main.go:141] libmachine: Using API Version  1
	I0610 11:44:04.101721   56499 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:44:04.101733   56499 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:44:04.102016   56499 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:44:04.102178   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetState
	I0610 11:44:04.102312   56499 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:44:04.102345   56499 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:44:04.104836   56499 kapi.go:59] client config for kubernetes-upgrade-685160: &rest.Config{Host:"https://192.168.50.47:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/client.crt", KeyFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/profiles/kubernetes-upgrade-685160/client.key", CAFile:"/home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfaf80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 11:44:04.105086   56499 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-685160"
	W0610 11:44:04.105100   56499 addons.go:243] addon default-storageclass should already be in state true
	I0610 11:44:04.105121   56499 host.go:66] Checking if "kubernetes-upgrade-685160" exists ...
	I0610 11:44:04.105350   56499 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:44:04.105374   56499 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:44:04.118140   56499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33321
	I0610 11:44:04.118607   56499 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:44:04.119155   56499 main.go:141] libmachine: Using API Version  1
	I0610 11:44:04.119182   56499 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:44:04.119569   56499 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:44:04.119918   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetState
	I0610 11:44:04.120566   56499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41995
	I0610 11:44:04.120992   56499 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:44:04.121509   56499 main.go:141] libmachine: Using API Version  1
	I0610 11:44:04.121537   56499 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:44:04.121665   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .DriverName
	I0610 11:44:04.123894   56499 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 11:44:00.897194   56769 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.19:22: connect: no route to host
	I0610 11:44:03.969210   56769 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.19:22: connect: no route to host
	I0610 11:44:04.121928   56499 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:44:04.125599   56499 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 11:44:04.125615   56499 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 11:44:04.125639   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHHostname
	I0610 11:44:04.126084   56499 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:44:04.126139   56499 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:44:04.128743   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:44:04.129280   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:51:fd", ip: ""} in network mk-kubernetes-upgrade-685160: {Iface:virbr1 ExpiryTime:2024-06-10 12:41:51 +0000 UTC Type:0 Mac:52:54:00:9b:51:fd Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-685160 Clientid:01:52:54:00:9b:51:fd}
	I0610 11:44:04.129312   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined IP address 192.168.50.47 and MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:44:04.129473   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHPort
	I0610 11:44:04.129698   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHKeyPath
	I0610 11:44:04.129885   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHUsername
	I0610 11:44:04.130095   56499 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/kubernetes-upgrade-685160/id_rsa Username:docker}
	I0610 11:44:04.142323   56499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39745
	I0610 11:44:04.142691   56499 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:44:04.143174   56499 main.go:141] libmachine: Using API Version  1
	I0610 11:44:04.143198   56499 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:44:04.143501   56499 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:44:04.143684   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetState
	I0610 11:44:04.145236   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .DriverName
	I0610 11:44:04.145441   56499 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 11:44:04.145456   56499 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 11:44:04.145471   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHHostname
	I0610 11:44:04.148738   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:44:04.149324   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:51:fd", ip: ""} in network mk-kubernetes-upgrade-685160: {Iface:virbr1 ExpiryTime:2024-06-10 12:41:51 +0000 UTC Type:0 Mac:52:54:00:9b:51:fd Iaid: IPaddr:192.168.50.47 Prefix:24 Hostname:kubernetes-upgrade-685160 Clientid:01:52:54:00:9b:51:fd}
	I0610 11:44:04.149355   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | domain kubernetes-upgrade-685160 has defined IP address 192.168.50.47 and MAC address 52:54:00:9b:51:fd in network mk-kubernetes-upgrade-685160
	I0610 11:44:04.149499   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHPort
	I0610 11:44:04.149685   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHKeyPath
	I0610 11:44:04.149813   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .GetSSHUsername
	I0610 11:44:04.149926   56499 sshutil.go:53] new ssh client: &{IP:192.168.50.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/kubernetes-upgrade-685160/id_rsa Username:docker}
	I0610 11:44:04.242189   56499 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 11:44:04.257549   56499 api_server.go:52] waiting for apiserver process to appear ...
	I0610 11:44:04.257625   56499 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:44:04.269652   56499 api_server.go:72] duration metric: took 188.31286ms to wait for apiserver process to appear ...
	I0610 11:44:04.269676   56499 api_server.go:88] waiting for apiserver healthz status ...
	I0610 11:44:04.269693   56499 api_server.go:253] Checking apiserver healthz at https://192.168.50.47:8443/healthz ...
	I0610 11:44:04.273543   56499 api_server.go:279] https://192.168.50.47:8443/healthz returned 200:
	ok
	I0610 11:44:04.274608   56499 api_server.go:141] control plane version: v1.30.1
	I0610 11:44:04.274635   56499 api_server.go:131] duration metric: took 4.951128ms to wait for apiserver health ...
	I0610 11:44:04.274645   56499 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 11:44:04.278542   56499 system_pods.go:59] 5 kube-system pods found
	I0610 11:44:04.278562   56499 system_pods.go:61] "etcd-kubernetes-upgrade-685160" [cf27bfeb-288c-47b1-8f3a-b50f988b52ed] Running
	I0610 11:44:04.278568   56499 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-685160" [de77a21e-d762-4216-8b23-810fd33676e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0610 11:44:04.278573   56499 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-685160" [7c7aa32e-b106-4d04-8f57-155b30189836] Running
	I0610 11:44:04.278579   56499 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-685160" [bc011c93-caec-4e8c-9441-1bc90f3abfbb] Running
	I0610 11:44:04.278583   56499 system_pods.go:61] "storage-provisioner" [4061c992-b7d2-4474-b709-94f104f79056] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0610 11:44:04.278590   56499 system_pods.go:74] duration metric: took 3.939023ms to wait for pod list to return data ...
	I0610 11:44:04.278601   56499 kubeadm.go:576] duration metric: took 197.263766ms to wait for: map[apiserver:true system_pods:true]
	I0610 11:44:04.278624   56499 node_conditions.go:102] verifying NodePressure condition ...
	I0610 11:44:04.280740   56499 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 11:44:04.280753   56499 node_conditions.go:123] node cpu capacity is 2
	I0610 11:44:04.280761   56499 node_conditions.go:105] duration metric: took 2.132118ms to run NodePressure ...
	I0610 11:44:04.280771   56499 start.go:240] waiting for startup goroutines ...
	I0610 11:44:04.345571   56499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 11:44:04.362295   56499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 11:44:04.922593   56499 main.go:141] libmachine: Making call to close driver server
	I0610 11:44:04.922621   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .Close
	I0610 11:44:04.922661   56499 main.go:141] libmachine: Making call to close driver server
	I0610 11:44:04.922674   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .Close
	I0610 11:44:04.922948   56499 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:44:04.922965   56499 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:44:04.922973   56499 main.go:141] libmachine: Making call to close driver server
	I0610 11:44:04.922982   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .Close
	I0610 11:44:04.923006   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | Closing plugin on server side
	I0610 11:44:04.923049   56499 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:44:04.923068   56499 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:44:04.923078   56499 main.go:141] libmachine: Making call to close driver server
	I0610 11:44:04.923089   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .Close
	I0610 11:44:04.923199   56499 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:44:04.923214   56499 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:44:04.923228   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | Closing plugin on server side
	I0610 11:44:04.923346   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | Closing plugin on server side
	I0610 11:44:04.923369   56499 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:44:04.923387   56499 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:44:04.928644   56499 main.go:141] libmachine: Making call to close driver server
	I0610 11:44:04.928664   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) Calling .Close
	I0610 11:44:04.928890   56499 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:44:04.928906   56499 main.go:141] libmachine: (kubernetes-upgrade-685160) DBG | Closing plugin on server side
	I0610 11:44:04.928907   56499 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:44:04.931710   56499 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0610 11:44:04.932898   56499 addons.go:510] duration metric: took 851.441661ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0610 11:44:04.932929   56499 start.go:245] waiting for cluster config update ...
	I0610 11:44:04.932943   56499 start.go:254] writing updated cluster config ...
	I0610 11:44:04.933218   56499 ssh_runner.go:195] Run: rm -f paused
	I0610 11:44:04.980128   56499 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 11:44:04.981950   56499 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-685160" cluster and "default" namespace by default
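	(Before reporting success, the tool lists the kube-system pods and node conditions twice, per the system_pods.go and node_conditions.go entries above. A minimal client-go sketch of an equivalent check follows; the kubeconfig path is a hypothetical placeholder, and the snippet mirrors only the pod listing, not minikube's full readiness logic.)

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the test run writes its own under
	// the jenkins minikube-integration workspace.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// List kube-system pods and print their phases, as the log's
	// "waiting for kube-system pods to appear" step does.
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
}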
	
	
	==> CRI-O <==
	Jun 10 11:44:05 kubernetes-upgrade-685160 crio[1949]: time="2024-06-10 11:44:05.623842125Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718019845623819141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20f65376-3c19-4023-b0a1-41c30d471af3 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:44:05 kubernetes-upgrade-685160 crio[1949]: time="2024-06-10 11:44:05.624418371Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=89626031-2b94-4d5e-b037-1d038aeea83d name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:44:05 kubernetes-upgrade-685160 crio[1949]: time="2024-06-10 11:44:05.624469435Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=89626031-2b94-4d5e-b037-1d038aeea83d name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:44:05 kubernetes-upgrade-685160 crio[1949]: time="2024-06-10 11:44:05.624648285Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bce3b57fbce89be63765fdeb2736cc742a85ba9274532189e63b3b3a908be9af,PodSandboxId:ff74750497d5d997ecfd4086fd7fa63321b8034de3a0d7f448dd21360e6b7eab,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718019839381598831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f78c5c54990e96ad18b39482c096da,},Annotations:map[string]string{io.kubernetes.container.hash: a82063cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a164acb87c5e0db01c859f5099b012674acb611c7f7f78114dd27e004c969be,PodSandboxId:14efdf8130d4585c6d66dd50718e3a424b2d799b75d154e9bff2699cc82860df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718019839345348522,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5e1485b94ff9518ef578d91c769ba1,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb05e37f4e7466b88ffc382853964d41b371223656a5ab5ab6c1f1919282fb37,PodSandboxId:a099667d573685ef6796fd7b5fc29b29743d15377041905911a7c8a42d9f3018,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718019839282722811,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b179713a28fd80c0cc32c3b0caf57c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1d26206c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be89b9a2fd619b574067e867d301fe836b3b1e341b3ccbf8bcd1d4e321eb8d75,PodSandboxId:163b45040b287f35d25420302ebd436b3d8600777faa50a7416710591da652ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718019744497005217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5e1485b94ff9518ef578d91c769ba1,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad5fbe828b6fad7d925ac287dbd514550ee52dbc67d85f8ef0e218bdee35953,PodSandboxId:cf78d60b5e373f6d30ea3d1a7eefe38f3ef22ef175840c20b8e6bb93f4a65dbc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718019744440814249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f78c5c54990e96ad18b39482c096da,},Annotations:map[string]string{io.kubernetes.container.hash: a82063cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453ab28adb5bd4ed491b8761c188fda0d07c0e9c431e705fc1b8d56a3da1a43a,PodSandboxId:55d7371542758571d161d01b65816d29b3031151e326df95756dd4c2b580bd60,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718019744456140850,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b179713a28fd80c0cc32c3b0caf57c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1d26206c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01dacdebd65b0eca4e557f597ec68eec14daf6add2fdae85f8fa07898049106b,PodSandboxId:3a1f0bcd909db0fd8c7257c4cab7e4a383e06bee6639eb619dcffce603d3c9b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718019729768096921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aafbd4ab61f8e53adaa6142da976f4ea,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=89626031-2b94-4d5e-b037-1d038aeea83d name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:44:05 kubernetes-upgrade-685160 crio[1949]: time="2024-06-10 11:44:05.658685386Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e27c1879-3e25-44d7-b5c7-81a6abeb4687 name=/runtime.v1.RuntimeService/Version
	Jun 10 11:44:05 kubernetes-upgrade-685160 crio[1949]: time="2024-06-10 11:44:05.658755115Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e27c1879-3e25-44d7-b5c7-81a6abeb4687 name=/runtime.v1.RuntimeService/Version
	Jun 10 11:44:05 kubernetes-upgrade-685160 crio[1949]: time="2024-06-10 11:44:05.659829190Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dfbd4e00-7682-41c4-9bcd-ce49a5a047ff name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:44:05 kubernetes-upgrade-685160 crio[1949]: time="2024-06-10 11:44:05.660256076Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718019845660176341,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dfbd4e00-7682-41c4-9bcd-ce49a5a047ff name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:44:05 kubernetes-upgrade-685160 crio[1949]: time="2024-06-10 11:44:05.660853430Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05232255-e73b-48f0-be41-192cadd9c6d7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:44:05 kubernetes-upgrade-685160 crio[1949]: time="2024-06-10 11:44:05.660916109Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05232255-e73b-48f0-be41-192cadd9c6d7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:44:05 kubernetes-upgrade-685160 crio[1949]: time="2024-06-10 11:44:05.661070977Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bce3b57fbce89be63765fdeb2736cc742a85ba9274532189e63b3b3a908be9af,PodSandboxId:ff74750497d5d997ecfd4086fd7fa63321b8034de3a0d7f448dd21360e6b7eab,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718019839381598831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f78c5c54990e96ad18b39482c096da,},Annotations:map[string]string{io.kubernetes.container.hash: a82063cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a164acb87c5e0db01c859f5099b012674acb611c7f7f78114dd27e004c969be,PodSandboxId:14efdf8130d4585c6d66dd50718e3a424b2d799b75d154e9bff2699cc82860df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718019839345348522,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5e1485b94ff9518ef578d91c769ba1,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb05e37f4e7466b88ffc382853964d41b371223656a5ab5ab6c1f1919282fb37,PodSandboxId:a099667d573685ef6796fd7b5fc29b29743d15377041905911a7c8a42d9f3018,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718019839282722811,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b179713a28fd80c0cc32c3b0caf57c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1d26206c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be89b9a2fd619b574067e867d301fe836b3b1e341b3ccbf8bcd1d4e321eb8d75,PodSandboxId:163b45040b287f35d25420302ebd436b3d8600777faa50a7416710591da652ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718019744497005217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5e1485b94ff9518ef578d91c769ba1,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad5fbe828b6fad7d925ac287dbd514550ee52dbc67d85f8ef0e218bdee35953,PodSandboxId:cf78d60b5e373f6d30ea3d1a7eefe38f3ef22ef175840c20b8e6bb93f4a65dbc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718019744440814249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f78c5c54990e96ad18b39482c096da,},Annotations:map[string]string{io.kubernetes.container.hash: a82063cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453ab28adb5bd4ed491b8761c188fda0d07c0e9c431e705fc1b8d56a3da1a43a,PodSandboxId:55d7371542758571d161d01b65816d29b3031151e326df95756dd4c2b580bd60,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718019744456140850,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b179713a28fd80c0cc32c3b0caf57c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1d26206c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01dacdebd65b0eca4e557f597ec68eec14daf6add2fdae85f8fa07898049106b,PodSandboxId:3a1f0bcd909db0fd8c7257c4cab7e4a383e06bee6639eb619dcffce603d3c9b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718019729768096921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aafbd4ab61f8e53adaa6142da976f4ea,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=05232255-e73b-48f0-be41-192cadd9c6d7 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:44:05 kubernetes-upgrade-685160 crio[1949]: time="2024-06-10 11:44:05.695465105Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=54b147a1-4bd6-44b1-8449-e3cc9897eb3e name=/runtime.v1.RuntimeService/Version
	Jun 10 11:44:05 kubernetes-upgrade-685160 crio[1949]: time="2024-06-10 11:44:05.695545027Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=54b147a1-4bd6-44b1-8449-e3cc9897eb3e name=/runtime.v1.RuntimeService/Version
	Jun 10 11:44:05 kubernetes-upgrade-685160 crio[1949]: time="2024-06-10 11:44:05.696714301Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2fc71568-c96a-4403-9952-5e681932b243 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:44:05 kubernetes-upgrade-685160 crio[1949]: time="2024-06-10 11:44:05.697184703Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718019845697158983,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2fc71568-c96a-4403-9952-5e681932b243 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:44:05 kubernetes-upgrade-685160 crio[1949]: time="2024-06-10 11:44:05.697725264Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=816b2ad9-8c42-40af-b066-f842b5ebec46 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:44:05 kubernetes-upgrade-685160 crio[1949]: time="2024-06-10 11:44:05.697830957Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=816b2ad9-8c42-40af-b066-f842b5ebec46 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:44:05 kubernetes-upgrade-685160 crio[1949]: time="2024-06-10 11:44:05.698159819Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bce3b57fbce89be63765fdeb2736cc742a85ba9274532189e63b3b3a908be9af,PodSandboxId:ff74750497d5d997ecfd4086fd7fa63321b8034de3a0d7f448dd21360e6b7eab,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718019839381598831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f78c5c54990e96ad18b39482c096da,},Annotations:map[string]string{io.kubernetes.container.hash: a82063cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a164acb87c5e0db01c859f5099b012674acb611c7f7f78114dd27e004c969be,PodSandboxId:14efdf8130d4585c6d66dd50718e3a424b2d799b75d154e9bff2699cc82860df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718019839345348522,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5e1485b94ff9518ef578d91c769ba1,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb05e37f4e7466b88ffc382853964d41b371223656a5ab5ab6c1f1919282fb37,PodSandboxId:a099667d573685ef6796fd7b5fc29b29743d15377041905911a7c8a42d9f3018,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718019839282722811,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b179713a28fd80c0cc32c3b0caf57c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1d26206c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be89b9a2fd619b574067e867d301fe836b3b1e341b3ccbf8bcd1d4e321eb8d75,PodSandboxId:163b45040b287f35d25420302ebd436b3d8600777faa50a7416710591da652ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718019744497005217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5e1485b94ff9518ef578d91c769ba1,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad5fbe828b6fad7d925ac287dbd514550ee52dbc67d85f8ef0e218bdee35953,PodSandboxId:cf78d60b5e373f6d30ea3d1a7eefe38f3ef22ef175840c20b8e6bb93f4a65dbc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718019744440814249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f78c5c54990e96ad18b39482c096da,},Annotations:map[string]string{io.kubernetes.container.hash: a82063cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453ab28adb5bd4ed491b8761c188fda0d07c0e9c431e705fc1b8d56a3da1a43a,PodSandboxId:55d7371542758571d161d01b65816d29b3031151e326df95756dd4c2b580bd60,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718019744456140850,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b179713a28fd80c0cc32c3b0caf57c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1d26206c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01dacdebd65b0eca4e557f597ec68eec14daf6add2fdae85f8fa07898049106b,PodSandboxId:3a1f0bcd909db0fd8c7257c4cab7e4a383e06bee6639eb619dcffce603d3c9b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718019729768096921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aafbd4ab61f8e53adaa6142da976f4ea,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=816b2ad9-8c42-40af-b066-f842b5ebec46 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:44:05 kubernetes-upgrade-685160 crio[1949]: time="2024-06-10 11:44:05.734105170Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5e3f1e6e-3f8b-4059-afe8-bc1f51594783 name=/runtime.v1.RuntimeService/Version
	Jun 10 11:44:05 kubernetes-upgrade-685160 crio[1949]: time="2024-06-10 11:44:05.734181577Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5e3f1e6e-3f8b-4059-afe8-bc1f51594783 name=/runtime.v1.RuntimeService/Version
	Jun 10 11:44:05 kubernetes-upgrade-685160 crio[1949]: time="2024-06-10 11:44:05.735036796Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=58d2acf4-629c-4e91-99dc-90596bfe5f8b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:44:05 kubernetes-upgrade-685160 crio[1949]: time="2024-06-10 11:44:05.735388380Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718019845735366243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=58d2acf4-629c-4e91-99dc-90596bfe5f8b name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:44:05 kubernetes-upgrade-685160 crio[1949]: time="2024-06-10 11:44:05.735834337Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=936c23cb-8ac3-4136-95f6-5fd5c25335f9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:44:05 kubernetes-upgrade-685160 crio[1949]: time="2024-06-10 11:44:05.735884282Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=936c23cb-8ac3-4136-95f6-5fd5c25335f9 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:44:05 kubernetes-upgrade-685160 crio[1949]: time="2024-06-10 11:44:05.736038298Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bce3b57fbce89be63765fdeb2736cc742a85ba9274532189e63b3b3a908be9af,PodSandboxId:ff74750497d5d997ecfd4086fd7fa63321b8034de3a0d7f448dd21360e6b7eab,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718019839381598831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f78c5c54990e96ad18b39482c096da,},Annotations:map[string]string{io.kubernetes.container.hash: a82063cc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a164acb87c5e0db01c859f5099b012674acb611c7f7f78114dd27e004c969be,PodSandboxId:14efdf8130d4585c6d66dd50718e3a424b2d799b75d154e9bff2699cc82860df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718019839345348522,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5e1485b94ff9518ef578d91c769ba1,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb05e37f4e7466b88ffc382853964d41b371223656a5ab5ab6c1f1919282fb37,PodSandboxId:a099667d573685ef6796fd7b5fc29b29743d15377041905911a7c8a42d9f3018,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718019839282722811,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b179713a28fd80c0cc32c3b0caf57c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1d26206c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be89b9a2fd619b574067e867d301fe836b3b1e341b3ccbf8bcd1d4e321eb8d75,PodSandboxId:163b45040b287f35d25420302ebd436b3d8600777faa50a7416710591da652ba,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1718019744497005217,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b5e1485b94ff9518ef578d91c769ba1,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ad5fbe828b6fad7d925ac287dbd514550ee52dbc67d85f8ef0e218bdee35953,PodSandboxId:cf78d60b5e373f6d30ea3d1a7eefe38f3ef22ef175840c20b8e6bb93f4a65dbc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1718019744440814249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15f78c5c54990e96ad18b39482c096da,},Annotations:map[string]string{io.kubernetes.container.hash: a82063cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453ab28adb5bd4ed491b8761c188fda0d07c0e9c431e705fc1b8d56a3da1a43a,PodSandboxId:55d7371542758571d161d01b65816d29b3031151e326df95756dd4c2b580bd60,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1718019744456140850,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b179713a28fd80c0cc32c3b0caf57c6,},Annotations:map[string]string{io.kubernetes.container.hash: 1d26206c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contain
er.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01dacdebd65b0eca4e557f597ec68eec14daf6add2fdae85f8fa07898049106b,PodSandboxId:3a1f0bcd909db0fd8c7257c4cab7e4a383e06bee6639eb619dcffce603d3c9b3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1718019729768096921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-685160,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aafbd4ab61f8e53adaa6142da976f4ea,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=936c23cb-8ac3-4136-95f6-5fd5c25335f9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	bce3b57fbce89       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   6 seconds ago        Running             etcd                      2                   ff74750497d5d       etcd-kubernetes-upgrade-685160
	0a164acb87c5e       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   6 seconds ago        Running             kube-scheduler            2                   14efdf8130d45       kube-scheduler-kubernetes-upgrade-685160
	cb05e37f4e746       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   6 seconds ago        Running             kube-apiserver            2                   a099667d57368       kube-apiserver-kubernetes-upgrade-685160
	be89b9a2fd619       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   About a minute ago   Exited              kube-scheduler            1                   163b45040b287       kube-scheduler-kubernetes-upgrade-685160
	453ab28adb5bd       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   About a minute ago   Exited              kube-apiserver            1                   55d7371542758       kube-apiserver-kubernetes-upgrade-685160
	1ad5fbe828b6f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   About a minute ago   Exited              etcd                      1                   cf78d60b5e373       etcd-kubernetes-upgrade-685160
	01dacdebd65b0       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   About a minute ago   Exited              kube-controller-manager   0                   3a1f0bcd909db       kube-controller-manager-kubernetes-upgrade-685160
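	The listing above is CRI-O's view of the control plane: the attempt-2 etcd, kube-scheduler and kube-apiserver containers are only a few seconds old, while their attempt-1 predecessors and the kube-controller-manager have exited. A rough way to reproduce this view on the node, assuming the profile name shown in these logs, is to shell in and query the runtime directly:
	
	    minikube -p kubernetes-upgrade-685160 ssh
	    sudo crictl ps -a                 # all containers, including exited attempts
	    sudo crictl logs 453ab28adb5bd    # e.g. the exited attempt-1 kube-apiserver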
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-685160
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-685160
	                    kubernetes.io/os=linux
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 11:42:12 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-685160
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 11:44:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 11:44:02 +0000   Mon, 10 Jun 2024 11:42:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 11:44:02 +0000   Mon, 10 Jun 2024 11:42:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 11:44:02 +0000   Mon, 10 Jun 2024 11:42:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 11:44:02 +0000   Mon, 10 Jun 2024 11:42:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.47
	  Hostname:    kubernetes-upgrade-685160
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 83939a5e91954f9d9abebe8929619f99
	  System UUID:                83939a5e-9195-4f9d-9abe-be8929619f99
	  Boot ID:                    9145dc10-e304-4fcf-8268-317e1ac22d97
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-685160                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         103s
	  kube-system                 kube-apiserver-kubernetes-upgrade-685160             250m (12%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-685160    200m (10%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-kubernetes-upgrade-685160             100m (5%)     0 (0%)      0 (0%)           0 (0%)         106s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (4%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From     Message
	  ----    ------                   ----                 ----     -------
	  Normal  NodeAllocatableEnforced  118s                 kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  116s (x8 over 119s)  kubelet  Node kubernetes-upgrade-685160 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 119s)  kubelet  Node kubernetes-upgrade-685160 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x7 over 119s)  kubelet  Node kubernetes-upgrade-685160 status is now: NodeHasSufficientPID
	  Normal  Starting                 7s                   kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s (x8 over 7s)      kubelet  Node kubernetes-upgrade-685160 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x8 over 7s)      kubelet  Node kubernetes-upgrade-685160 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x7 over 7s)      kubelet  Node kubernetes-upgrade-685160 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7s                   kubelet  Updated Node Allocatable limit across pods
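	Note that the node still carries the node.kubernetes.io/not-ready:NoSchedule taint even though its Ready condition is True; that taint is normally cleared by the node lifecycle controller in kube-controller-manager, which at this point exists only as an exited attempt-0 container. Assuming the kubeconfig context matches the profile name, the taint can be checked with something like:
	
	    kubectl --context kubernetes-upgrade-685160 get node kubernetes-upgrade-685160 \
	      -o jsonpath='{.spec.taints}{"\n"}'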
	
	
	==> dmesg <==
	[  +1.857449] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.584486] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.543856] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.056625] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057019] systemd-fstab-generator[577]: Ignoring "noauto" option for root device
	[  +0.214186] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.121773] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.266361] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[Jun10 11:42] systemd-fstab-generator[730]: Ignoring "noauto" option for root device
	[  +1.947501] systemd-fstab-generator[850]: Ignoring "noauto" option for root device
	[  +0.056791] kauditd_printk_skb: 158 callbacks suppressed
	[ +15.406699] systemd-fstab-generator[1251]: Ignoring "noauto" option for root device
	[  +0.087888] kauditd_printk_skb: 69 callbacks suppressed
	[  +3.384161] systemd-fstab-generator[1786]: Ignoring "noauto" option for root device
	[  +0.174210] systemd-fstab-generator[1800]: Ignoring "noauto" option for root device
	[  +0.181359] systemd-fstab-generator[1817]: Ignoring "noauto" option for root device
	[  +0.147633] systemd-fstab-generator[1829]: Ignoring "noauto" option for root device
	[  +0.269359] systemd-fstab-generator[1857]: Ignoring "noauto" option for root device
	[Jun10 11:43] systemd-fstab-generator[2034]: Ignoring "noauto" option for root device
	[  +0.066510] kauditd_printk_skb: 172 callbacks suppressed
	[  +2.095222] systemd-fstab-generator[2158]: Ignoring "noauto" option for root device
	[Jun10 11:44] systemd-fstab-generator[2534]: Ignoring "noauto" option for root device
	[  +0.086669] kauditd_printk_skb: 75 callbacks suppressed
	
	
	==> etcd [1ad5fbe828b6fad7d925ac287dbd514550ee52dbc67d85f8ef0e218bdee35953] <==
	{"level":"info","ts":"2024-06-10T11:42:24.78571Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"7.04383ms"}
	{"level":"info","ts":"2024-06-10T11:42:24.787527Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-06-10T11:42:24.790736Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"a66a701203d69b1d","local-member-id":"63d12f7d015473f3","commit-index":297}
	{"level":"info","ts":"2024-06-10T11:42:24.793082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 switched to configuration voters=()"}
	{"level":"info","ts":"2024-06-10T11:42:24.793838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 became follower at term 2"}
	{"level":"info","ts":"2024-06-10T11:42:24.793892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 63d12f7d015473f3 [peers: [], term: 2, commit: 297, applied: 0, lastindex: 297, lastterm: 2]"}
	{"level":"warn","ts":"2024-06-10T11:42:24.803906Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-06-10T11:42:24.812941Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":290}
	{"level":"info","ts":"2024-06-10T11:42:24.82591Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-06-10T11:42:24.831223Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"63d12f7d015473f3","timeout":"7s"}
	{"level":"info","ts":"2024-06-10T11:42:24.831518Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"63d12f7d015473f3"}
	{"level":"info","ts":"2024-06-10T11:42:24.831612Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"63d12f7d015473f3","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-06-10T11:42:24.835244Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-06-10T11:42:24.836948Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-10T11:42:24.83708Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-10T11:42:24.837111Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-10T11:42:24.837372Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 switched to configuration voters=(7192582293827122163)"}
	{"level":"info","ts":"2024-06-10T11:42:24.83748Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a66a701203d69b1d","local-member-id":"63d12f7d015473f3","added-peer-id":"63d12f7d015473f3","added-peer-peer-urls":["https://192.168.50.47:2380"]}
	{"level":"info","ts":"2024-06-10T11:42:24.83766Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a66a701203d69b1d","local-member-id":"63d12f7d015473f3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T11:42:24.839641Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T11:42:24.847256Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-10T11:42:24.847581Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"63d12f7d015473f3","initial-advertise-peer-urls":["https://192.168.50.47:2380"],"listen-peer-urls":["https://192.168.50.47:2380"],"advertise-client-urls":["https://192.168.50.47:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.47:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-10T11:42:24.847391Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.47:2380"}
	{"level":"info","ts":"2024-06-10T11:42:24.850829Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.47:2380"}
	{"level":"info","ts":"2024-06-10T11:42:24.848659Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> etcd [bce3b57fbce89be63765fdeb2736cc742a85ba9274532189e63b3b3a908be9af] <==
	{"level":"info","ts":"2024-06-10T11:43:59.672805Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-10T11:43:59.672881Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-06-10T11:43:59.673132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 switched to configuration voters=(7192582293827122163)"}
	{"level":"info","ts":"2024-06-10T11:43:59.674912Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a66a701203d69b1d","local-member-id":"63d12f7d015473f3","added-peer-id":"63d12f7d015473f3","added-peer-peer-urls":["https://192.168.50.47:2380"]}
	{"level":"info","ts":"2024-06-10T11:43:59.675161Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a66a701203d69b1d","local-member-id":"63d12f7d015473f3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T11:43:59.675215Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T11:43:59.679005Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-06-10T11:43:59.679232Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"63d12f7d015473f3","initial-advertise-peer-urls":["https://192.168.50.47:2380"],"listen-peer-urls":["https://192.168.50.47:2380"],"advertise-client-urls":["https://192.168.50.47:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.47:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-06-10T11:43:59.679265Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-06-10T11:43:59.679297Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.47:2380"}
	{"level":"info","ts":"2024-06-10T11:43:59.679315Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.47:2380"}
	{"level":"info","ts":"2024-06-10T11:44:00.943117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-10T11:44:00.94318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-10T11:44:00.943231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 received MsgPreVoteResp from 63d12f7d015473f3 at term 2"}
	{"level":"info","ts":"2024-06-10T11:44:00.943245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 became candidate at term 3"}
	{"level":"info","ts":"2024-06-10T11:44:00.94325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 received MsgVoteResp from 63d12f7d015473f3 at term 3"}
	{"level":"info","ts":"2024-06-10T11:44:00.943261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"63d12f7d015473f3 became leader at term 3"}
	{"level":"info","ts":"2024-06-10T11:44:00.943268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 63d12f7d015473f3 elected leader 63d12f7d015473f3 at term 3"}
	{"level":"info","ts":"2024-06-10T11:44:00.950056Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"63d12f7d015473f3","local-member-attributes":"{Name:kubernetes-upgrade-685160 ClientURLs:[https://192.168.50.47:2379]}","request-path":"/0/members/63d12f7d015473f3/attributes","cluster-id":"a66a701203d69b1d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-10T11:44:00.950229Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T11:44:00.950405Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T11:44:00.950885Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-10T11:44:00.950909Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-10T11:44:00.952311Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-10T11:44:00.952356Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.47:2379"}
	
	
	==> kernel <==
	 11:44:06 up 2 min,  0 users,  load average: 0.79, 0.40, 0.16
	Linux kubernetes-upgrade-685160 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [453ab28adb5bd4ed491b8761c188fda0d07c0e9c431e705fc1b8d56a3da1a43a] <==
	I0610 11:42:24.756734       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 11:42:25.167038       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0610 11:42:25.170294       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0610 11:42:25.173274       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0610 11:42:25.173392       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0610 11:42:25.173542       1 instance.go:299] Using reconciler: lease
	W0610 11:42:25.690948       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:41998->127.0.0.1:2379: read: connection reset by peer"
	W0610 11:42:25.691076       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:41978->127.0.0.1:2379: read: connection reset by peer"
	W0610 11:42:25.691206       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:41986->127.0.0.1:2379: read: connection reset by peer"
	W0610 11:42:26.692272       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 11:42:26.692483       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 11:42:26.692657       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 11:42:28.044287       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 11:42:28.192211       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 11:42:28.310736       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 11:42:30.594072       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 11:42:30.660067       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 11:42:30.842442       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 11:42:34.278274       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 11:42:34.489148       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 11:42:35.026700       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 11:42:40.587211       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 11:42:42.218863       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0610 11:42:42.879879       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0610 11:42:45.174817       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
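	This attempt-1 kube-apiserver spends about twenty seconds retrying its etcd client connections ("connection reset by peer", then "connection refused" on 127.0.0.1:2379) before exiting fatally when the storage-factory context deadline expires; in the excerpt above, the attempt-1 etcd never reaches the "serving client traffic" stage during that window. When debugging this kind of startup race by hand, two minimal checks are whether anything is listening on the client port and whether etcd reports healthy on its metrics listener (the 127.0.0.1:2381 address comes from the etcd logs above):
	
	    sudo ss -tlnp | grep -E ':2379|:2381'
	    curl -s http://127.0.0.1:2381/health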
	
	
	==> kube-apiserver [cb05e37f4e7466b88ffc382853964d41b371223656a5ab5ab6c1f1919282fb37] <==
	I0610 11:44:02.157643       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 11:44:02.157927       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 11:44:02.255087       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0610 11:44:02.255117       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0610 11:44:02.261250       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0610 11:44:02.261358       1 policy_source.go:224] refreshing policies
	I0610 11:44:02.261488       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0610 11:44:02.277139       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0610 11:44:02.278376       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0610 11:44:02.280214       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 11:44:02.282679       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 11:44:02.283389       1 aggregator.go:165] initial CRD sync complete...
	I0610 11:44:02.283435       1 autoregister_controller.go:141] Starting autoregister controller
	I0610 11:44:02.283459       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0610 11:44:02.283483       1 cache.go:39] Caches are synced for autoregister controller
	I0610 11:44:02.286987       1 shared_informer.go:320] Caches are synced for configmaps
	I0610 11:44:02.295287       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0610 11:44:02.319807       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0610 11:44:02.328999       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 11:44:03.160490       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0610 11:44:03.869443       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0610 11:44:03.891290       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0610 11:44:03.921938       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0610 11:44:04.052573       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 11:44:04.059237       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
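	By contrast, the attempt-2 kube-apiserver syncs its caches and starts admitting writes (the "quota admission added evaluator" lines) within a couple of seconds of starting. Assuming the kubeconfig context for this profile, its readiness can be probed directly:
	
	    kubectl --context kubernetes-upgrade-685160 get --raw='/readyz?verbose' | tail -n 3
	    kubectl --context kubernetes-upgrade-685160 get --raw='/livez'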
	
	
	==> kube-controller-manager [01dacdebd65b0eca4e557f597ec68eec14daf6add2fdae85f8fa07898049106b] <==
	E0610 11:42:15.383658       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0610 11:42:15.383688       1 controllermanager.go:739] "Warning: skipping controller" controller="service-lb-controller"
	I0610 11:42:15.534242       1 controllermanager.go:761] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0610 11:42:15.534412       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0610 11:42:15.534534       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0610 11:42:15.683058       1 controllermanager.go:761] "Started controller" controller="endpoints-controller"
	I0610 11:42:15.683192       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0610 11:42:15.683216       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0610 11:42:15.931655       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0610 11:42:15.931796       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0610 11:42:15.931844       1 controllermanager.go:761] "Started controller" controller="garbage-collector-controller"
	I0610 11:42:15.931890       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0610 11:42:16.188404       1 controllermanager.go:761] "Started controller" controller="ttl-controller"
	I0610 11:42:16.188936       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0610 11:42:16.190933       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0610 11:42:16.334613       1 controllermanager.go:761] "Started controller" controller="persistentvolume-expander-controller"
	I0610 11:42:16.334849       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0610 11:42:16.334894       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0610 11:42:16.381854       1 controllermanager.go:761] "Started controller" controller="taint-eviction-controller"
	I0610 11:42:16.381961       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0610 11:42:16.382170       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0610 11:42:16.382224       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0610 11:42:16.636695       1 controllermanager.go:761] "Started controller" controller="namespace-controller"
	I0610 11:42:16.636819       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0610 11:42:16.636831       1 shared_informer.go:313] Waiting for caches to sync for namespace
	
	
	==> kube-scheduler [0a164acb87c5e0db01c859f5099b012674acb611c7f7f78114dd27e004c969be] <==
	I0610 11:44:00.242738       1 serving.go:380] Generated self-signed cert in-memory
	W0610 11:44:02.205724       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0610 11:44:02.205800       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 11:44:02.205812       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0610 11:44:02.205817       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 11:44:02.243352       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0610 11:44:02.243393       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 11:44:02.249027       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0610 11:44:02.249183       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 11:44:02.249205       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 11:44:02.249222       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 11:44:02.349269       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [be89b9a2fd619b574067e867d301fe836b3b1e341b3ccbf8bcd1d4e321eb8d75] <==
	I0610 11:42:25.448402       1 serving.go:380] Generated self-signed cert in-memory
	W0610 11:42:35.844989       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.168.50.47:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0610 11:42:35.845025       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0610 11:42:35.845031       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 11:42:46.185148       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0610 11:42:46.185187       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 11:42:46.188850       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 11:42:46.188895       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0610 11:42:46.188908       1 shared_informer.go:316] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 11:42:46.188915       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 11:42:46.188969       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0610 11:42:46.189116       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0610 11:42:46.189205       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jun 10 11:43:59 kubernetes-upgrade-685160 kubelet[2165]: I0610 11:43:59.377049    2165 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-685160"
	Jun 10 11:43:59 kubernetes-upgrade-685160 kubelet[2165]: E0610 11:43:59.378312    2165 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.47:8443: connect: connection refused" node="kubernetes-upgrade-685160"
	Jun 10 11:43:59 kubernetes-upgrade-685160 kubelet[2165]: I0610 11:43:59.763827    2165 scope.go:117] "RemoveContainer" containerID="01dacdebd65b0eca4e557f597ec68eec14daf6add2fdae85f8fa07898049106b"
	Jun 10 11:43:59 kubernetes-upgrade-685160 kubelet[2165]: E0610 11:43:59.773231    2165 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-685160_kube-system_aafbd4ab61f8e53adaa6142da976f4ea_1\" is already in use by cf8f53c90e9e07c5bfb557370d1811e4326a60ecc10a195918965efc3bb041cf. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="71de19821c79b2016546345743ce8ea035fd59e9c530f17c33e782b42b864d1e"
	Jun 10 11:43:59 kubernetes-upgrade-685160 kubelet[2165]: E0610 11:43:59.773392    2165 kuberuntime_manager.go:1256] container &Container{Name:kube-controller-manager,Image:registry.k8s.io/kube-controller-manager:v1.30.1,Command:[kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=mk --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt --cluster-signing-key-file=/var/lib/minikube/certs/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=false --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --root-ca-file=/var/lib/minikube/certs/ca.crt --service-account-private-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-acco
unt-credentials=true],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{200 -3} {<nil>} 200m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flexvolume-dir,ReadOnly:false,MountPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/var/lib/minikube/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/controller-manager.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},L
ivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-kubernetes-upgrade-685160
_kube-system(aafbd4ab61f8e53adaa6142da976f4ea): CreateContainerError: the container name "k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-685160_kube-system_aafbd4ab61f8e53adaa6142da976f4ea_1" is already in use by cf8f53c90e9e07c5bfb557370d1811e4326a60ecc10a195918965efc3bb041cf. You have to remove that container to be able to reuse that name: that name is already in use
	Jun 10 11:43:59 kubernetes-upgrade-685160 kubelet[2165]: E0610 11:43:59.773444    2165 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"the container name \\\"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-685160_kube-system_aafbd4ab61f8e53adaa6142da976f4ea_1\\\" is already in use by cf8f53c90e9e07c5bfb557370d1811e4326a60ecc10a195918965efc3bb041cf. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-685160" podUID="aafbd4ab61f8e53adaa6142da976f4ea"
	Jun 10 11:44:00 kubernetes-upgrade-685160 kubelet[2165]: I0610 11:44:00.180582    2165 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-685160"
	Jun 10 11:44:00 kubernetes-upgrade-685160 kubelet[2165]: I0610 11:44:00.783170    2165 scope.go:117] "RemoveContainer" containerID="01dacdebd65b0eca4e557f597ec68eec14daf6add2fdae85f8fa07898049106b"
	Jun 10 11:44:00 kubernetes-upgrade-685160 kubelet[2165]: E0610 11:44:00.790104    2165 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-685160_kube-system_aafbd4ab61f8e53adaa6142da976f4ea_1\" is already in use by cf8f53c90e9e07c5bfb557370d1811e4326a60ecc10a195918965efc3bb041cf. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="71de19821c79b2016546345743ce8ea035fd59e9c530f17c33e782b42b864d1e"
	Jun 10 11:44:00 kubernetes-upgrade-685160 kubelet[2165]: E0610 11:44:00.790231    2165 kuberuntime_manager.go:1256] container &Container{Name:kube-controller-manager,Image:registry.k8s.io/kube-controller-manager:v1.30.1,Command:[kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=mk --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt --cluster-signing-key-file=/var/lib/minikube/certs/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=false --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --root-ca-file=/var/lib/minikube/certs/ca.crt --service-account-private-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-acco
unt-credentials=true],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{200 -3} {<nil>} 200m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flexvolume-dir,ReadOnly:false,MountPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/var/lib/minikube/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/controller-manager.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},L
ivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-kubernetes-upgrade-685160
_kube-system(aafbd4ab61f8e53adaa6142da976f4ea): CreateContainerError: the container name "k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-685160_kube-system_aafbd4ab61f8e53adaa6142da976f4ea_1" is already in use by cf8f53c90e9e07c5bfb557370d1811e4326a60ecc10a195918965efc3bb041cf. You have to remove that container to be able to reuse that name: that name is already in use
	Jun 10 11:44:00 kubernetes-upgrade-685160 kubelet[2165]: E0610 11:44:00.790260    2165 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"the container name \\\"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-685160_kube-system_aafbd4ab61f8e53adaa6142da976f4ea_1\\\" is already in use by cf8f53c90e9e07c5bfb557370d1811e4326a60ecc10a195918965efc3bb041cf. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-685160" podUID="aafbd4ab61f8e53adaa6142da976f4ea"
	Jun 10 11:44:01 kubernetes-upgrade-685160 kubelet[2165]: I0610 11:44:01.783222    2165 scope.go:117] "RemoveContainer" containerID="01dacdebd65b0eca4e557f597ec68eec14daf6add2fdae85f8fa07898049106b"
	Jun 10 11:44:01 kubernetes-upgrade-685160 kubelet[2165]: E0610 11:44:01.796351    2165 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-685160_kube-system_aafbd4ab61f8e53adaa6142da976f4ea_1\" is already in use by cf8f53c90e9e07c5bfb557370d1811e4326a60ecc10a195918965efc3bb041cf. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="71de19821c79b2016546345743ce8ea035fd59e9c530f17c33e782b42b864d1e"
	Jun 10 11:44:01 kubernetes-upgrade-685160 kubelet[2165]: E0610 11:44:01.796546    2165 kuberuntime_manager.go:1256] container &Container{Name:kube-controller-manager,Image:registry.k8s.io/kube-controller-manager:v1.30.1,Command:[kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=mk --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt --cluster-signing-key-file=/var/lib/minikube/certs/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=false --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --root-ca-file=/var/lib/minikube/certs/ca.crt --service-account-private-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-acco
unt-credentials=true],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{200 -3} {<nil>} 200m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flexvolume-dir,ReadOnly:false,MountPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/var/lib/minikube/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/controller-manager.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},L
ivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-kubernetes-upgrade-685160
_kube-system(aafbd4ab61f8e53adaa6142da976f4ea): CreateContainerError: the container name "k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-685160_kube-system_aafbd4ab61f8e53adaa6142da976f4ea_1" is already in use by cf8f53c90e9e07c5bfb557370d1811e4326a60ecc10a195918965efc3bb041cf. You have to remove that container to be able to reuse that name: that name is already in use
	Jun 10 11:44:01 kubernetes-upgrade-685160 kubelet[2165]: E0610 11:44:01.796642    2165 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"the container name \\\"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-685160_kube-system_aafbd4ab61f8e53adaa6142da976f4ea_1\\\" is already in use by cf8f53c90e9e07c5bfb557370d1811e4326a60ecc10a195918965efc3bb041cf. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-685160" podUID="aafbd4ab61f8e53adaa6142da976f4ea"
	Jun 10 11:44:02 kubernetes-upgrade-685160 kubelet[2165]: I0610 11:44:02.319812    2165 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-685160"
	Jun 10 11:44:02 kubernetes-upgrade-685160 kubelet[2165]: I0610 11:44:02.320169    2165 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-685160"
	Jun 10 11:44:02 kubernetes-upgrade-685160 kubelet[2165]: I0610 11:44:02.665827    2165 apiserver.go:52] "Watching apiserver"
	Jun 10 11:44:02 kubernetes-upgrade-685160 kubelet[2165]: I0610 11:44:02.670260    2165 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jun 10 11:44:02 kubernetes-upgrade-685160 kubelet[2165]: I0610 11:44:02.784151    2165 scope.go:117] "RemoveContainer" containerID="01dacdebd65b0eca4e557f597ec68eec14daf6add2fdae85f8fa07898049106b"
	Jun 10 11:44:02 kubernetes-upgrade-685160 kubelet[2165]: E0610 11:44:02.798035    2165 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-685160_kube-system_aafbd4ab61f8e53adaa6142da976f4ea_1\" is already in use by cf8f53c90e9e07c5bfb557370d1811e4326a60ecc10a195918965efc3bb041cf. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="71de19821c79b2016546345743ce8ea035fd59e9c530f17c33e782b42b864d1e"
	Jun 10 11:44:02 kubernetes-upgrade-685160 kubelet[2165]: E0610 11:44:02.798412    2165 kuberuntime_manager.go:1256] container &Container{Name:kube-controller-manager,Image:registry.k8s.io/kube-controller-manager:v1.30.1,Command:[kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-cidr=10.244.0.0/16 --cluster-name=mk --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt --cluster-signing-key-file=/var/lib/minikube/certs/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=false --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt --root-ca-file=/var/lib/minikube/certs/ca.crt --service-account-private-key-file=/var/lib/minikube/certs/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-acco
unt-credentials=true],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{200 -3} {<nil>} 200m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/ssl/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:flexvolume-dir,ReadOnly:false,MountPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:k8s-certs,ReadOnly:true,MountPath:/var/lib/minikube/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/controller-manager.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:usr-share-ca-certificates,ReadOnly:true,MountPath:/usr/share/ca-certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},L
ivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10257 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-kubernetes-upgrade-685160
_kube-system(aafbd4ab61f8e53adaa6142da976f4ea): CreateContainerError: the container name "k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-685160_kube-system_aafbd4ab61f8e53adaa6142da976f4ea_1" is already in use by cf8f53c90e9e07c5bfb557370d1811e4326a60ecc10a195918965efc3bb041cf. You have to remove that container to be able to reuse that name: that name is already in use
	Jun 10 11:44:02 kubernetes-upgrade-685160 kubelet[2165]: E0610 11:44:02.798558    2165 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"the container name \\\"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-685160_kube-system_aafbd4ab61f8e53adaa6142da976f4ea_1\\\" is already in use by cf8f53c90e9e07c5bfb557370d1811e4326a60ecc10a195918965efc3bb041cf. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-685160" podUID="aafbd4ab61f8e53adaa6142da976f4ea"
	Jun 10 11:44:03 kubernetes-upgrade-685160 kubelet[2165]: I0610 11:44:03.786960    2165 scope.go:117] "RemoveContainer" containerID="01dacdebd65b0eca4e557f597ec68eec14daf6add2fdae85f8fa07898049106b"
	Jun 10 11:44:03 kubernetes-upgrade-685160 kubelet[2165]: E0610 11:44:03.788314    2165 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-kubernetes-upgrade-685160_kube-system(aafbd4ab61f8e53adaa6142da976f4ea)\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-685160" podUID="aafbd4ab61f8e53adaa6142da976f4ea"
	

                                                
                                                
-- /stdout --
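
The kubelet block above loops on the same CreateContainerError: CRI-O still holds an exited container that already owns the requested kube-controller-manager name, so every new create attempt collides with the container ID reported in the error until garbage collection clears it. Purely as an illustration of how one could inspect and clear that conflict by hand (the profile is deleted below anyway), here is a hedged sketch using the same minikube ssh pattern this report uses elsewhere; it assumes the old container has already exited:

	# Sketch only: list the controller-manager containers CRI-O knows about,
	# then remove the stale one named in the error so kubelet can recreate it.
	out/minikube-linux-amd64 -p kubernetes-upgrade-685160 ssh "sudo crictl ps -a --name kube-controller-manager"
	out/minikube-linux-amd64 -p kubernetes-upgrade-685160 ssh "sudo crictl rm cf8f53c90e9e07c5bfb557370d1811e4326a60ecc10a195918965efc3bb041cf"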
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-685160 -n kubernetes-upgrade-685160
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-685160 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-685160 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-685160 describe pod storage-provisioner: exit status 1 (60.286035ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-685160 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-685160" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-685160
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-685160: (1.093982944s)
--- FAIL: TestKubernetesUpgrade (441.88s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (265.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-166693 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-166693 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m25.187075016s)

                                                
                                                
-- stdout --
	* [old-k8s-version-166693] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19046
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-166693" primary control-plane node in "old-k8s-version-166693" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 11:38:57.017319   54458 out.go:291] Setting OutFile to fd 1 ...
	I0610 11:38:57.017439   54458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:38:57.017451   54458 out.go:304] Setting ErrFile to fd 2...
	I0610 11:38:57.017455   54458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:38:57.017693   54458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 11:38:57.018395   54458 out.go:298] Setting JSON to false
	I0610 11:38:57.019424   54458 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4878,"bootTime":1718014659,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 11:38:57.019487   54458 start.go:139] virtualization: kvm guest
	I0610 11:38:57.022013   54458 out.go:177] * [old-k8s-version-166693] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 11:38:57.023907   54458 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 11:38:57.023881   54458 notify.go:220] Checking for updates...
	I0610 11:38:57.025394   54458 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 11:38:57.026939   54458 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 11:38:57.028383   54458 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 11:38:57.029727   54458 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 11:38:57.031056   54458 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 11:38:57.032992   54458 config.go:182] Loaded profile config "cert-expiration-324836": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:38:57.033140   54458 config.go:182] Loaded profile config "kubernetes-upgrade-685160": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0610 11:38:57.033287   54458 config.go:182] Loaded profile config "stopped-upgrade-161665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0610 11:38:57.033416   54458 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 11:38:57.080063   54458 out.go:177] * Using the kvm2 driver based on user configuration
	I0610 11:38:57.081676   54458 start.go:297] selected driver: kvm2
	I0610 11:38:57.081702   54458 start.go:901] validating driver "kvm2" against <nil>
	I0610 11:38:57.081718   54458 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 11:38:57.082838   54458 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 11:38:57.082984   54458 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19046-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0610 11:38:57.107481   54458 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0610 11:38:57.107541   54458 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 11:38:57.107766   54458 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 11:38:57.107790   54458 cni.go:84] Creating CNI manager for ""
	I0610 11:38:57.107803   54458 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:38:57.107811   54458 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 11:38:57.107866   54458 start.go:340] cluster config:
	{Name:old-k8s-version-166693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-166693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:38:57.108002   54458 iso.go:125] acquiring lock: {Name:mk85871dbcb370cef71a4aa2ff7ef43503908c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 11:38:57.109821   54458 out.go:177] * Starting "old-k8s-version-166693" primary control-plane node in "old-k8s-version-166693" cluster
	I0610 11:38:57.111261   54458 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0610 11:38:57.111302   54458 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0610 11:38:57.111314   54458 cache.go:56] Caching tarball of preloaded images
	I0610 11:38:57.111407   54458 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 11:38:57.111422   54458 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0610 11:38:57.111557   54458 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/config.json ...
	I0610 11:38:57.111580   54458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/config.json: {Name:mka9f7511c21135f3d449d39c92f524984407e64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:38:57.111754   54458 start.go:360] acquireMachinesLock for old-k8s-version-166693: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 11:38:57.111793   54458 start.go:364] duration metric: took 18.524µs to acquireMachinesLock for "old-k8s-version-166693"
	I0610 11:38:57.111812   54458 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-166693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-166693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 11:38:57.111889   54458 start.go:125] createHost starting for "" (driver="kvm2")
	I0610 11:38:57.113446   54458 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 11:38:57.113608   54458 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:38:57.113660   54458 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:38:57.128366   54458 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33779
	I0610 11:38:57.128888   54458 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:38:57.129569   54458 main.go:141] libmachine: Using API Version  1
	I0610 11:38:57.129590   54458 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:38:57.129962   54458 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:38:57.130161   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetMachineName
	I0610 11:38:57.130295   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .DriverName
	I0610 11:38:57.130429   54458 start.go:159] libmachine.API.Create for "old-k8s-version-166693" (driver="kvm2")
	I0610 11:38:57.130452   54458 client.go:168] LocalClient.Create starting
	I0610 11:38:57.130486   54458 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem
	I0610 11:38:57.130522   54458 main.go:141] libmachine: Decoding PEM data...
	I0610 11:38:57.130537   54458 main.go:141] libmachine: Parsing certificate...
	I0610 11:38:57.130583   54458 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem
	I0610 11:38:57.130601   54458 main.go:141] libmachine: Decoding PEM data...
	I0610 11:38:57.130612   54458 main.go:141] libmachine: Parsing certificate...
	I0610 11:38:57.130626   54458 main.go:141] libmachine: Running pre-create checks...
	I0610 11:38:57.130635   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .PreCreateCheck
	I0610 11:38:57.131011   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetConfigRaw
	I0610 11:38:57.131381   54458 main.go:141] libmachine: Creating machine...
	I0610 11:38:57.131394   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .Create
	I0610 11:38:57.131543   54458 main.go:141] libmachine: (old-k8s-version-166693) Creating KVM machine...
	I0610 11:38:57.132913   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | found existing default KVM network
	I0610 11:38:57.134537   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:38:57.134369   54478 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:4c:d0:b9} reservation:<nil>}
	I0610 11:38:57.135738   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:38:57.135646   54478 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:93:50:cc} reservation:<nil>}
	I0610 11:38:57.137082   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:38:57.136938   54478 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:3e:67:dd} reservation:<nil>}
	I0610 11:38:57.138541   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:38:57.138474   54478 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003091b0}
	I0610 11:38:57.138611   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | created network xml: 
	I0610 11:38:57.138630   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | <network>
	I0610 11:38:57.138642   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG |   <name>mk-old-k8s-version-166693</name>
	I0610 11:38:57.138650   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG |   <dns enable='no'/>
	I0610 11:38:57.138658   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG |   
	I0610 11:38:57.138668   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0610 11:38:57.138678   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG |     <dhcp>
	I0610 11:38:57.138687   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0610 11:38:57.138696   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG |     </dhcp>
	I0610 11:38:57.138703   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG |   </ip>
	I0610 11:38:57.138720   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG |   
	I0610 11:38:57.138731   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | </network>
	I0610 11:38:57.138759   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | 
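
The DBG lines above are the libvirt network definition minikube generated for this profile, emitted one XML line per log entry, and the lines just below confirm that network mk-old-k8s-version-166693 was created. To verify it directly on the host, libvirt's own CLI can list and dump the same definition; a small sketch, assuming virsh is available on the agent and using the qemu:///system URI passed via --kvm-qemu-uri:

	# Sketch only: confirm the private network exists and view its XML.
	virsh --connect qemu:///system net-list --all
	virsh --connect qemu:///system net-dumpxml mk-old-k8s-version-166693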
	I0610 11:38:57.145013   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | trying to create private KVM network mk-old-k8s-version-166693 192.168.72.0/24...
	I0610 11:38:57.232805   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | private KVM network mk-old-k8s-version-166693 192.168.72.0/24 created
	I0610 11:38:57.232859   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:38:57.232756   54478 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 11:38:57.232888   54458 main.go:141] libmachine: (old-k8s-version-166693) Setting up store path in /home/jenkins/minikube-integration/19046-3880/.minikube/machines/old-k8s-version-166693 ...
	I0610 11:38:57.232901   54458 main.go:141] libmachine: (old-k8s-version-166693) Building disk image from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0610 11:38:57.232929   54458 main.go:141] libmachine: (old-k8s-version-166693) Downloading /home/jenkins/minikube-integration/19046-3880/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 11:38:57.461731   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:38:57.461576   54478 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/old-k8s-version-166693/id_rsa...
	I0610 11:38:57.600466   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:38:57.600305   54478 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/old-k8s-version-166693/old-k8s-version-166693.rawdisk...
	I0610 11:38:57.600565   54458 main.go:141] libmachine: (old-k8s-version-166693) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/old-k8s-version-166693 (perms=drwx------)
	I0610 11:38:57.600580   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | Writing magic tar header
	I0610 11:38:57.600596   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | Writing SSH key tar header
	I0610 11:38:57.600625   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:38:57.600418   54478 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/old-k8s-version-166693 ...
	I0610 11:38:57.600658   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/old-k8s-version-166693
	I0610 11:38:57.600678   54458 main.go:141] libmachine: (old-k8s-version-166693) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines (perms=drwxr-xr-x)
	I0610 11:38:57.600761   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines
	I0610 11:38:57.600800   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 11:38:57.600852   54458 main.go:141] libmachine: (old-k8s-version-166693) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube (perms=drwxr-xr-x)
	I0610 11:38:57.600867   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880
	I0610 11:38:57.600882   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0610 11:38:57.600902   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | Checking permissions on dir: /home/jenkins
	I0610 11:38:57.600913   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | Checking permissions on dir: /home
	I0610 11:38:57.600919   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | Skipping /home - not owner
	I0610 11:38:57.600958   54458 main.go:141] libmachine: (old-k8s-version-166693) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880 (perms=drwxrwxr-x)
	I0610 11:38:57.600985   54458 main.go:141] libmachine: (old-k8s-version-166693) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0610 11:38:57.601002   54458 main.go:141] libmachine: (old-k8s-version-166693) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0610 11:38:57.601015   54458 main.go:141] libmachine: (old-k8s-version-166693) Creating domain...
	I0610 11:38:57.602160   54458 main.go:141] libmachine: (old-k8s-version-166693) define libvirt domain using xml: 
	I0610 11:38:57.602205   54458 main.go:141] libmachine: (old-k8s-version-166693) <domain type='kvm'>
	I0610 11:38:57.602220   54458 main.go:141] libmachine: (old-k8s-version-166693)   <name>old-k8s-version-166693</name>
	I0610 11:38:57.602230   54458 main.go:141] libmachine: (old-k8s-version-166693)   <memory unit='MiB'>2200</memory>
	I0610 11:38:57.602239   54458 main.go:141] libmachine: (old-k8s-version-166693)   <vcpu>2</vcpu>
	I0610 11:38:57.602259   54458 main.go:141] libmachine: (old-k8s-version-166693)   <features>
	I0610 11:38:57.602288   54458 main.go:141] libmachine: (old-k8s-version-166693)     <acpi/>
	I0610 11:38:57.602305   54458 main.go:141] libmachine: (old-k8s-version-166693)     <apic/>
	I0610 11:38:57.602314   54458 main.go:141] libmachine: (old-k8s-version-166693)     <pae/>
	I0610 11:38:57.602331   54458 main.go:141] libmachine: (old-k8s-version-166693)     
	I0610 11:38:57.602375   54458 main.go:141] libmachine: (old-k8s-version-166693)   </features>
	I0610 11:38:57.602409   54458 main.go:141] libmachine: (old-k8s-version-166693)   <cpu mode='host-passthrough'>
	I0610 11:38:57.602431   54458 main.go:141] libmachine: (old-k8s-version-166693)   
	I0610 11:38:57.602447   54458 main.go:141] libmachine: (old-k8s-version-166693)   </cpu>
	I0610 11:38:57.602465   54458 main.go:141] libmachine: (old-k8s-version-166693)   <os>
	I0610 11:38:57.602480   54458 main.go:141] libmachine: (old-k8s-version-166693)     <type>hvm</type>
	I0610 11:38:57.602497   54458 main.go:141] libmachine: (old-k8s-version-166693)     <boot dev='cdrom'/>
	I0610 11:38:57.602515   54458 main.go:141] libmachine: (old-k8s-version-166693)     <boot dev='hd'/>
	I0610 11:38:57.602540   54458 main.go:141] libmachine: (old-k8s-version-166693)     <bootmenu enable='no'/>
	I0610 11:38:57.602558   54458 main.go:141] libmachine: (old-k8s-version-166693)   </os>
	I0610 11:38:57.602582   54458 main.go:141] libmachine: (old-k8s-version-166693)   <devices>
	I0610 11:38:57.602618   54458 main.go:141] libmachine: (old-k8s-version-166693)     <disk type='file' device='cdrom'>
	I0610 11:38:57.602643   54458 main.go:141] libmachine: (old-k8s-version-166693)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/old-k8s-version-166693/boot2docker.iso'/>
	I0610 11:38:57.602669   54458 main.go:141] libmachine: (old-k8s-version-166693)       <target dev='hdc' bus='scsi'/>
	I0610 11:38:57.602680   54458 main.go:141] libmachine: (old-k8s-version-166693)       <readonly/>
	I0610 11:38:57.602688   54458 main.go:141] libmachine: (old-k8s-version-166693)     </disk>
	I0610 11:38:57.602699   54458 main.go:141] libmachine: (old-k8s-version-166693)     <disk type='file' device='disk'>
	I0610 11:38:57.602711   54458 main.go:141] libmachine: (old-k8s-version-166693)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0610 11:38:57.602724   54458 main.go:141] libmachine: (old-k8s-version-166693)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/old-k8s-version-166693/old-k8s-version-166693.rawdisk'/>
	I0610 11:38:57.602758   54458 main.go:141] libmachine: (old-k8s-version-166693)       <target dev='hda' bus='virtio'/>
	I0610 11:38:57.602776   54458 main.go:141] libmachine: (old-k8s-version-166693)     </disk>
	I0610 11:38:57.602795   54458 main.go:141] libmachine: (old-k8s-version-166693)     <interface type='network'>
	I0610 11:38:57.602803   54458 main.go:141] libmachine: (old-k8s-version-166693)       <source network='mk-old-k8s-version-166693'/>
	I0610 11:38:57.602813   54458 main.go:141] libmachine: (old-k8s-version-166693)       <model type='virtio'/>
	I0610 11:38:57.602820   54458 main.go:141] libmachine: (old-k8s-version-166693)     </interface>
	I0610 11:38:57.602830   54458 main.go:141] libmachine: (old-k8s-version-166693)     <interface type='network'>
	I0610 11:38:57.602837   54458 main.go:141] libmachine: (old-k8s-version-166693)       <source network='default'/>
	I0610 11:38:57.602846   54458 main.go:141] libmachine: (old-k8s-version-166693)       <model type='virtio'/>
	I0610 11:38:57.602854   54458 main.go:141] libmachine: (old-k8s-version-166693)     </interface>
	I0610 11:38:57.602863   54458 main.go:141] libmachine: (old-k8s-version-166693)     <serial type='pty'>
	I0610 11:38:57.602870   54458 main.go:141] libmachine: (old-k8s-version-166693)       <target port='0'/>
	I0610 11:38:57.602877   54458 main.go:141] libmachine: (old-k8s-version-166693)     </serial>
	I0610 11:38:57.602885   54458 main.go:141] libmachine: (old-k8s-version-166693)     <console type='pty'>
	I0610 11:38:57.602893   54458 main.go:141] libmachine: (old-k8s-version-166693)       <target type='serial' port='0'/>
	I0610 11:38:57.602901   54458 main.go:141] libmachine: (old-k8s-version-166693)     </console>
	I0610 11:38:57.602908   54458 main.go:141] libmachine: (old-k8s-version-166693)     <rng model='virtio'>
	I0610 11:38:57.602918   54458 main.go:141] libmachine: (old-k8s-version-166693)       <backend model='random'>/dev/random</backend>
	I0610 11:38:57.602924   54458 main.go:141] libmachine: (old-k8s-version-166693)     </rng>
	I0610 11:38:57.602932   54458 main.go:141] libmachine: (old-k8s-version-166693)     
	I0610 11:38:57.602938   54458 main.go:141] libmachine: (old-k8s-version-166693)     
	I0610 11:38:57.602945   54458 main.go:141] libmachine: (old-k8s-version-166693)   </devices>
	I0610 11:38:57.602951   54458 main.go:141] libmachine: (old-k8s-version-166693) </domain>
	I0610 11:38:57.602960   54458 main.go:141] libmachine: (old-k8s-version-166693) 
	I0610 11:38:57.606935   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:0a:96:eb in network default
	I0610 11:38:57.607693   54458 main.go:141] libmachine: (old-k8s-version-166693) Ensuring networks are active...
	I0610 11:38:57.607721   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:38:57.608673   54458 main.go:141] libmachine: (old-k8s-version-166693) Ensuring network default is active
	I0610 11:38:57.609252   54458 main.go:141] libmachine: (old-k8s-version-166693) Ensuring network mk-old-k8s-version-166693 is active
	I0610 11:38:57.609868   54458 main.go:141] libmachine: (old-k8s-version-166693) Getting domain xml...
	I0610 11:38:57.610774   54458 main.go:141] libmachine: (old-k8s-version-166693) Creating domain...
	I0610 11:38:58.954669   54458 main.go:141] libmachine: (old-k8s-version-166693) Waiting to get IP...
	I0610 11:38:58.955670   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:38:58.956203   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find current IP address of domain old-k8s-version-166693 in network mk-old-k8s-version-166693
	I0610 11:38:58.956227   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:38:58.956146   54478 retry.go:31] will retry after 254.646578ms: waiting for machine to come up
	I0610 11:38:59.213490   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:38:59.213583   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find current IP address of domain old-k8s-version-166693 in network mk-old-k8s-version-166693
	I0610 11:38:59.213615   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:38:59.213213   54478 retry.go:31] will retry after 375.560944ms: waiting for machine to come up
	I0610 11:38:59.590327   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:38:59.591039   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find current IP address of domain old-k8s-version-166693 in network mk-old-k8s-version-166693
	I0610 11:38:59.591095   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:38:59.590952   54478 retry.go:31] will retry after 405.142842ms: waiting for machine to come up
	I0610 11:38:59.997445   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:38:59.998060   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find current IP address of domain old-k8s-version-166693 in network mk-old-k8s-version-166693
	I0610 11:38:59.998091   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:38:59.998005   54478 retry.go:31] will retry after 503.729327ms: waiting for machine to come up
	I0610 11:39:00.503627   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:00.504210   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find current IP address of domain old-k8s-version-166693 in network mk-old-k8s-version-166693
	I0610 11:39:00.504240   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:39:00.504169   54478 retry.go:31] will retry after 759.003753ms: waiting for machine to come up
	I0610 11:39:01.265481   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:01.265978   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find current IP address of domain old-k8s-version-166693 in network mk-old-k8s-version-166693
	I0610 11:39:01.266025   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:39:01.265904   54478 retry.go:31] will retry after 832.773921ms: waiting for machine to come up
	I0610 11:39:02.100061   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:02.100540   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find current IP address of domain old-k8s-version-166693 in network mk-old-k8s-version-166693
	I0610 11:39:02.100569   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:39:02.100489   54478 retry.go:31] will retry after 912.47266ms: waiting for machine to come up
	I0610 11:39:03.015175   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:03.015712   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find current IP address of domain old-k8s-version-166693 in network mk-old-k8s-version-166693
	I0610 11:39:03.015744   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:39:03.015658   54478 retry.go:31] will retry after 1.298082273s: waiting for machine to come up
	I0610 11:39:04.315020   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:04.315665   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find current IP address of domain old-k8s-version-166693 in network mk-old-k8s-version-166693
	I0610 11:39:04.315690   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:39:04.315612   54478 retry.go:31] will retry after 1.157323325s: waiting for machine to come up
	I0610 11:39:05.474752   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:05.475373   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find current IP address of domain old-k8s-version-166693 in network mk-old-k8s-version-166693
	I0610 11:39:05.475400   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:39:05.475316   54478 retry.go:31] will retry after 2.136356657s: waiting for machine to come up
	I0610 11:39:07.709149   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:07.709650   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find current IP address of domain old-k8s-version-166693 in network mk-old-k8s-version-166693
	I0610 11:39:07.709675   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:39:07.709596   54478 retry.go:31] will retry after 1.888747417s: waiting for machine to come up
	I0610 11:39:09.600720   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:09.601192   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find current IP address of domain old-k8s-version-166693 in network mk-old-k8s-version-166693
	I0610 11:39:09.601229   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:39:09.601159   54478 retry.go:31] will retry after 3.602278077s: waiting for machine to come up
	I0610 11:39:13.205366   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:13.206094   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find current IP address of domain old-k8s-version-166693 in network mk-old-k8s-version-166693
	I0610 11:39:13.206119   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:39:13.206056   54478 retry.go:31] will retry after 3.634741011s: waiting for machine to come up
	I0610 11:39:16.845110   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:16.845606   54458 main.go:141] libmachine: (old-k8s-version-166693) Found IP for machine: 192.168.72.34
	I0610 11:39:16.845638   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has current primary IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:16.845648   54458 main.go:141] libmachine: (old-k8s-version-166693) Reserving static IP address...
	I0610 11:39:16.845990   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-166693", mac: "52:54:00:43:ea:f9", ip: "192.168.72.34"} in network mk-old-k8s-version-166693
	I0610 11:39:16.922622   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | Getting to WaitForSSH function...
	I0610 11:39:16.922652   54458 main.go:141] libmachine: (old-k8s-version-166693) Reserved static IP address: 192.168.72.34
	I0610 11:39:16.922667   54458 main.go:141] libmachine: (old-k8s-version-166693) Waiting for SSH to be available...
	I0610 11:39:16.925550   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:16.926117   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:39:11 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:minikube Clientid:01:52:54:00:43:ea:f9}
	I0610 11:39:16.926149   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:16.926361   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | Using SSH client type: external
	I0610 11:39:16.926391   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | Using SSH private key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/old-k8s-version-166693/id_rsa (-rw-------)
	I0610 11:39:16.926435   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.34 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19046-3880/.minikube/machines/old-k8s-version-166693/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0610 11:39:16.926448   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | About to run SSH command:
	I0610 11:39:16.926462   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | exit 0
	I0610 11:39:17.058068   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | SSH cmd err, output: <nil>: 
	I0610 11:39:17.058332   54458 main.go:141] libmachine: (old-k8s-version-166693) KVM machine creation complete!
	I0610 11:39:17.058681   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetConfigRaw
	I0610 11:39:17.059260   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .DriverName
	I0610 11:39:17.059447   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .DriverName
	I0610 11:39:17.059648   54458 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0610 11:39:17.059662   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetState
	I0610 11:39:17.061025   54458 main.go:141] libmachine: Detecting operating system of created instance...
	I0610 11:39:17.061038   54458 main.go:141] libmachine: Waiting for SSH to be available...
	I0610 11:39:17.061043   54458 main.go:141] libmachine: Getting to WaitForSSH function...
	I0610 11:39:17.061061   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHHostname
	I0610 11:39:17.063658   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:17.064069   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:39:11 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:39:17.064090   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:17.064313   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHPort
	I0610 11:39:17.064489   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:39:17.064677   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:39:17.064846   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHUsername
	I0610 11:39:17.065056   54458 main.go:141] libmachine: Using SSH client type: native
	I0610 11:39:17.065270   54458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.34 22 <nil> <nil>}
	I0610 11:39:17.065282   54458 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0610 11:39:17.176926   54458 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 11:39:17.176971   54458 main.go:141] libmachine: Detecting the provisioner...
	I0610 11:39:17.176984   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHHostname
	I0610 11:39:17.180039   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:17.180432   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:39:11 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:39:17.180461   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:17.180688   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHPort
	I0610 11:39:17.180913   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:39:17.181101   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:39:17.181270   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHUsername
	I0610 11:39:17.181476   54458 main.go:141] libmachine: Using SSH client type: native
	I0610 11:39:17.181634   54458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.34 22 <nil> <nil>}
	I0610 11:39:17.181644   54458 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0610 11:39:17.289596   54458 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0610 11:39:17.289695   54458 main.go:141] libmachine: found compatible host: buildroot
	I0610 11:39:17.289709   54458 main.go:141] libmachine: Provisioning with buildroot...
	I0610 11:39:17.289721   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetMachineName
	I0610 11:39:17.289987   54458 buildroot.go:166] provisioning hostname "old-k8s-version-166693"
	I0610 11:39:17.290009   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetMachineName
	I0610 11:39:17.290180   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHHostname
	I0610 11:39:17.292668   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:17.293097   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:39:11 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:39:17.293131   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:17.293294   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHPort
	I0610 11:39:17.293534   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:39:17.293713   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:39:17.293868   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHUsername
	I0610 11:39:17.294065   54458 main.go:141] libmachine: Using SSH client type: native
	I0610 11:39:17.294248   54458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.34 22 <nil> <nil>}
	I0610 11:39:17.294264   54458 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-166693 && echo "old-k8s-version-166693" | sudo tee /etc/hostname
	I0610 11:39:17.418894   54458 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-166693
	
	I0610 11:39:17.418929   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHHostname
	I0610 11:39:17.421980   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:17.422359   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:39:11 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:39:17.422385   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:17.422585   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHPort
	I0610 11:39:17.422772   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:39:17.422907   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:39:17.423055   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHUsername
	I0610 11:39:17.423223   54458 main.go:141] libmachine: Using SSH client type: native
	I0610 11:39:17.423433   54458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.34 22 <nil> <nil>}
	I0610 11:39:17.423463   54458 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-166693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-166693/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-166693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 11:39:17.537271   54458 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 11:39:17.537308   54458 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19046-3880/.minikube CaCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19046-3880/.minikube}
	I0610 11:39:17.537338   54458 buildroot.go:174] setting up certificates
	I0610 11:39:17.537350   54458 provision.go:84] configureAuth start
	I0610 11:39:17.537363   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetMachineName
	I0610 11:39:17.537665   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetIP
	I0610 11:39:17.540744   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:17.541140   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:39:11 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:39:17.541169   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:17.541320   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHHostname
	I0610 11:39:17.543790   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:17.544212   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:39:11 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:39:17.544238   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:17.544402   54458 provision.go:143] copyHostCerts
	I0610 11:39:17.544495   54458 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem, removing ...
	I0610 11:39:17.544507   54458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 11:39:17.544575   54458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem (1082 bytes)
	I0610 11:39:17.544805   54458 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem, removing ...
	I0610 11:39:17.544822   54458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 11:39:17.544870   54458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem (1123 bytes)
	I0610 11:39:17.545005   54458 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem, removing ...
	I0610 11:39:17.545023   54458 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 11:39:17.545055   54458 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem (1675 bytes)
	I0610 11:39:17.545129   54458 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-166693 san=[127.0.0.1 192.168.72.34 localhost minikube old-k8s-version-166693]
	I0610 11:39:17.711292   54458 provision.go:177] copyRemoteCerts
	I0610 11:39:17.711348   54458 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 11:39:17.711371   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHHostname
	I0610 11:39:17.713997   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:17.714348   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:39:11 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:39:17.714396   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:17.714599   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHPort
	I0610 11:39:17.714785   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:39:17.715002   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHUsername
	I0610 11:39:17.715175   54458 sshutil.go:53] new ssh client: &{IP:192.168.72.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/old-k8s-version-166693/id_rsa Username:docker}
	I0610 11:39:17.800523   54458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0610 11:39:17.825460   54458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 11:39:17.850466   54458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 11:39:17.875461   54458 provision.go:87] duration metric: took 338.100212ms to configureAuth
	I0610 11:39:17.875489   54458 buildroot.go:189] setting minikube options for container-runtime
	I0610 11:39:17.875641   54458 config.go:182] Loaded profile config "old-k8s-version-166693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0610 11:39:17.875709   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHHostname
	I0610 11:39:17.878576   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:17.879097   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:39:11 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:39:17.879129   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:17.879279   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHPort
	I0610 11:39:17.879554   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:39:17.879768   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:39:17.879946   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHUsername
	I0610 11:39:17.880177   54458 main.go:141] libmachine: Using SSH client type: native
	I0610 11:39:17.880328   54458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.34 22 <nil> <nil>}
	I0610 11:39:17.880344   54458 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 11:39:18.134844   54458 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 11:39:18.134879   54458 main.go:141] libmachine: Checking connection to Docker...
	I0610 11:39:18.134892   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetURL
	I0610 11:39:18.135986   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | Using libvirt version 6000000
	I0610 11:39:18.138297   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:18.138674   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:39:11 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:39:18.138703   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:18.138885   54458 main.go:141] libmachine: Docker is up and running!
	I0610 11:39:18.138900   54458 main.go:141] libmachine: Reticulating splines...
	I0610 11:39:18.138908   54458 client.go:171] duration metric: took 21.008447556s to LocalClient.Create
	I0610 11:39:18.138934   54458 start.go:167] duration metric: took 21.00850591s to libmachine.API.Create "old-k8s-version-166693"
	I0610 11:39:18.138946   54458 start.go:293] postStartSetup for "old-k8s-version-166693" (driver="kvm2")
	I0610 11:39:18.138961   54458 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 11:39:18.138986   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .DriverName
	I0610 11:39:18.139341   54458 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 11:39:18.139367   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHHostname
	I0610 11:39:18.141733   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:18.142111   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:39:11 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:39:18.142142   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:18.142268   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHPort
	I0610 11:39:18.142430   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:39:18.142578   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHUsername
	I0610 11:39:18.142700   54458 sshutil.go:53] new ssh client: &{IP:192.168.72.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/old-k8s-version-166693/id_rsa Username:docker}
	I0610 11:39:18.223550   54458 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 11:39:18.227856   54458 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 11:39:18.227884   54458 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/addons for local assets ...
	I0610 11:39:18.227975   54458 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/files for local assets ...
	I0610 11:39:18.228069   54458 filesync.go:149] local asset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> 107582.pem in /etc/ssl/certs
	I0610 11:39:18.228181   54458 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 11:39:18.241150   54458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /etc/ssl/certs/107582.pem (1708 bytes)
	I0610 11:39:18.267239   54458 start.go:296] duration metric: took 128.276706ms for postStartSetup
	I0610 11:39:18.267297   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetConfigRaw
	I0610 11:39:18.268118   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetIP
	I0610 11:39:18.270815   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:18.271204   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:39:11 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:39:18.271242   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:18.271489   54458 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/config.json ...
	I0610 11:39:18.271675   54458 start.go:128] duration metric: took 21.159777341s to createHost
	I0610 11:39:18.271702   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHHostname
	I0610 11:39:18.274078   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:18.274519   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:39:11 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:39:18.274555   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:18.274691   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHPort
	I0610 11:39:18.274881   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:39:18.275034   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:39:18.275181   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHUsername
	I0610 11:39:18.275358   54458 main.go:141] libmachine: Using SSH client type: native
	I0610 11:39:18.275506   54458 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.34 22 <nil> <nil>}
	I0610 11:39:18.275521   54458 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 11:39:18.385394   54458 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718019558.356464123
	
	I0610 11:39:18.385424   54458 fix.go:216] guest clock: 1718019558.356464123
	I0610 11:39:18.385433   54458 fix.go:229] Guest: 2024-06-10 11:39:18.356464123 +0000 UTC Remote: 2024-06-10 11:39:18.271687647 +0000 UTC m=+21.299597860 (delta=84.776476ms)
	I0610 11:39:18.385458   54458 fix.go:200] guest clock delta is within tolerance: 84.776476ms
	I0610 11:39:18.385465   54458 start.go:83] releasing machines lock for "old-k8s-version-166693", held for 21.273663894s
	I0610 11:39:18.385496   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .DriverName
	I0610 11:39:18.385790   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetIP
	I0610 11:39:18.388852   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:18.389278   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:39:11 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:39:18.389309   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:18.389510   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .DriverName
	I0610 11:39:18.390099   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .DriverName
	I0610 11:39:18.390318   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .DriverName
	I0610 11:39:18.390407   54458 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 11:39:18.390451   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHHostname
	I0610 11:39:18.390554   54458 ssh_runner.go:195] Run: cat /version.json
	I0610 11:39:18.390581   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHHostname
	I0610 11:39:18.393294   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:18.393437   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:18.393738   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:39:11 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:39:18.393770   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:18.393800   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:39:11 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:39:18.393841   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:18.393907   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHPort
	I0610 11:39:18.394093   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHPort
	I0610 11:39:18.394100   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:39:18.394276   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHUsername
	I0610 11:39:18.394277   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:39:18.394462   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHUsername
	I0610 11:39:18.394460   54458 sshutil.go:53] new ssh client: &{IP:192.168.72.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/old-k8s-version-166693/id_rsa Username:docker}
	I0610 11:39:18.394576   54458 sshutil.go:53] new ssh client: &{IP:192.168.72.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/old-k8s-version-166693/id_rsa Username:docker}
	I0610 11:39:18.479471   54458 ssh_runner.go:195] Run: systemctl --version
	I0610 11:39:18.508981   54458 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 11:39:18.679789   54458 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 11:39:18.688542   54458 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 11:39:18.688626   54458 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 11:39:18.716555   54458 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 11:39:18.716581   54458 start.go:494] detecting cgroup driver to use...
	I0610 11:39:18.716650   54458 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 11:39:18.738358   54458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 11:39:18.756346   54458 docker.go:217] disabling cri-docker service (if available) ...
	I0610 11:39:18.756414   54458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 11:39:18.776621   54458 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 11:39:18.796257   54458 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 11:39:18.939255   54458 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 11:39:19.125347   54458 docker.go:233] disabling docker service ...
	I0610 11:39:19.125429   54458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 11:39:19.140241   54458 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 11:39:19.154346   54458 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 11:39:19.279829   54458 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 11:39:19.422215   54458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 11:39:19.440674   54458 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 11:39:19.462390   54458 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0610 11:39:19.462459   54458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:39:19.474354   54458 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 11:39:19.474420   54458 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:39:19.488132   54458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:39:19.501552   54458 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:39:19.512256   54458 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 11:39:19.522850   54458 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 11:39:19.532682   54458 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0610 11:39:19.532754   54458 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0610 11:39:19.546416   54458 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 11:39:19.557858   54458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:39:19.724671   54458 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0610 11:39:19.880919   54458 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 11:39:19.881016   54458 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 11:39:19.886726   54458 start.go:562] Will wait 60s for crictl version
	I0610 11:39:19.886791   54458 ssh_runner.go:195] Run: which crictl
	I0610 11:39:19.890667   54458 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 11:39:19.932714   54458 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0610 11:39:19.932783   54458 ssh_runner.go:195] Run: crio --version
	I0610 11:39:19.961041   54458 ssh_runner.go:195] Run: crio --version
	I0610 11:39:19.995228   54458 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0610 11:39:19.996361   54458 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetIP
	I0610 11:39:19.999923   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:20.000556   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:39:11 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:39:20.000590   54458 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:39:20.000766   54458 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0610 11:39:20.005775   54458 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 11:39:20.019968   54458 kubeadm.go:877] updating cluster {Name:old-k8s-version-166693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-166693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.34 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 11:39:20.020123   54458 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0610 11:39:20.020188   54458 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 11:39:20.056225   54458 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0610 11:39:20.056306   54458 ssh_runner.go:195] Run: which lz4
	I0610 11:39:20.061783   54458 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0610 11:39:20.067272   54458 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 11:39:20.067304   54458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0610 11:39:21.655441   54458 crio.go:462] duration metric: took 1.593699163s to copy over tarball
	I0610 11:39:21.655518   54458 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 11:39:24.372117   54458 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.716570519s)
	I0610 11:39:24.372141   54458 crio.go:469] duration metric: took 2.716667877s to extract the tarball
	I0610 11:39:24.372150   54458 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 11:39:24.420800   54458 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 11:39:24.474578   54458 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0610 11:39:24.474600   54458 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0610 11:39:24.474652   54458 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0610 11:39:24.474675   54458 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0610 11:39:24.474696   54458 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0610 11:39:24.474657   54458 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 11:39:24.474696   54458 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0610 11:39:24.474703   54458 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0610 11:39:24.474721   54458 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0610 11:39:24.474723   54458 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0610 11:39:24.476201   54458 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0610 11:39:24.476214   54458 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0610 11:39:24.476237   54458 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0610 11:39:24.476253   54458 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 11:39:24.476273   54458 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0610 11:39:24.476256   54458 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0610 11:39:24.476517   54458 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0610 11:39:24.476937   54458 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0610 11:39:24.679029   54458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0610 11:39:24.703751   54458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0610 11:39:24.716518   54458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0610 11:39:24.731996   54458 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0610 11:39:24.732040   54458 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0610 11:39:24.732100   54458 ssh_runner.go:195] Run: which crictl
	I0610 11:39:24.734762   54458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0610 11:39:24.734768   54458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0610 11:39:24.740586   54458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0610 11:39:24.747177   54458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0610 11:39:24.826962   54458 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0610 11:39:24.827067   54458 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0610 11:39:24.827177   54458 ssh_runner.go:195] Run: which crictl
	I0610 11:39:24.827327   54458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0610 11:39:24.827522   54458 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0610 11:39:24.827551   54458 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0610 11:39:24.827602   54458 ssh_runner.go:195] Run: which crictl
	I0610 11:39:24.945855   54458 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0610 11:39:24.945890   54458 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0610 11:39:24.945900   54458 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0610 11:39:24.945924   54458 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0610 11:39:24.945974   54458 ssh_runner.go:195] Run: which crictl
	I0610 11:39:24.945991   54458 ssh_runner.go:195] Run: which crictl
	I0610 11:39:24.946037   54458 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0610 11:39:24.946049   54458 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0610 11:39:24.946084   54458 ssh_runner.go:195] Run: which crictl
	I0610 11:39:24.949105   54458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0610 11:39:24.949137   54458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0610 11:39:24.949196   54458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0610 11:39:24.949234   54458 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0610 11:39:24.949261   54458 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0610 11:39:24.949294   54458 ssh_runner.go:195] Run: which crictl
	I0610 11:39:24.954534   54458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0610 11:39:24.954776   54458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0610 11:39:24.959875   54458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0610 11:39:25.077921   54458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0610 11:39:25.078119   54458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0610 11:39:25.078177   54458 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0610 11:39:25.108019   54458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0610 11:39:25.119762   54458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0610 11:39:25.119867   54458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0610 11:39:25.137332   54458 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0610 11:39:25.282378   54458 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 11:39:25.422491   54458 cache_images.go:92] duration metric: took 947.868008ms to LoadCachedImages
	W0610 11:39:25.422598   54458 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0610 11:39:25.422614   54458 kubeadm.go:928] updating node { 192.168.72.34 8443 v1.20.0 crio true true} ...
	I0610 11:39:25.422741   54458 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-166693 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-166693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 11:39:25.422827   54458 ssh_runner.go:195] Run: crio config
	I0610 11:39:25.479534   54458 cni.go:84] Creating CNI manager for ""
	I0610 11:39:25.479553   54458 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:39:25.479561   54458 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 11:39:25.479579   54458 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.34 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-166693 NodeName:old-k8s-version-166693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0610 11:39:25.479684   54458 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.34
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-166693"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.34
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.34"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 11:39:25.479739   54458 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0610 11:39:25.489354   54458 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 11:39:25.489422   54458 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 11:39:25.498346   54458 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0610 11:39:25.514614   54458 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 11:39:25.531276   54458 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0610 11:39:25.547540   54458 ssh_runner.go:195] Run: grep 192.168.72.34	control-plane.minikube.internal$ /etc/hosts
	I0610 11:39:25.551381   54458 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.34	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 11:39:25.563609   54458 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:39:25.680489   54458 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 11:39:25.697196   54458 certs.go:68] Setting up /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693 for IP: 192.168.72.34
	I0610 11:39:25.697220   54458 certs.go:194] generating shared ca certs ...
	I0610 11:39:25.697241   54458 certs.go:226] acquiring lock for ca certs: {Name:mke8b68fecbd8b649419d142cdde25446085a9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:39:25.697421   54458 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key
	I0610 11:39:25.697478   54458 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key
	I0610 11:39:25.697492   54458 certs.go:256] generating profile certs ...
	I0610 11:39:25.697557   54458 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/client.key
	I0610 11:39:25.697577   54458 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/client.crt with IP's: []
	I0610 11:39:25.796706   54458 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/client.crt ...
	I0610 11:39:25.796740   54458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/client.crt: {Name:mk9f49c283570761dc0ac45c040716b87387ff64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:39:25.796973   54458 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/client.key ...
	I0610 11:39:25.796995   54458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/client.key: {Name:mk196ce44b078353267aa91f95d26828f0700052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:39:25.797124   54458 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/apiserver.key.1a4331fb
	I0610 11:39:25.797145   54458 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/apiserver.crt.1a4331fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.34]
	I0610 11:39:25.943212   54458 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/apiserver.crt.1a4331fb ...
	I0610 11:39:25.943244   54458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/apiserver.crt.1a4331fb: {Name:mkbf093ffb79ac01bda18aa3c62c667bd8b9a373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:39:25.943423   54458 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/apiserver.key.1a4331fb ...
	I0610 11:39:25.943440   54458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/apiserver.key.1a4331fb: {Name:mk33db107fe1f46ff58e0537986121d3f4d9588f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:39:25.943547   54458 certs.go:381] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/apiserver.crt.1a4331fb -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/apiserver.crt
	I0610 11:39:25.943638   54458 certs.go:385] copying /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/apiserver.key.1a4331fb -> /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/apiserver.key
	I0610 11:39:25.943717   54458 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/proxy-client.key
	I0610 11:39:25.943740   54458 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/proxy-client.crt with IP's: []
	I0610 11:39:26.117325   54458 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/proxy-client.crt ...
	I0610 11:39:26.117360   54458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/proxy-client.crt: {Name:mk0a9c0731b39960ae000ed266bdecb2cdc31891 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:39:26.117532   54458 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/proxy-client.key ...
	I0610 11:39:26.117549   54458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/proxy-client.key: {Name:mk7c2a4f28435ac6a6746e66232db80b4ea3037f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:39:26.117785   54458 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem (1338 bytes)
	W0610 11:39:26.117835   54458 certs.go:480] ignoring /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758_empty.pem, impossibly tiny 0 bytes
	I0610 11:39:26.117846   54458 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 11:39:26.117878   54458 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem (1082 bytes)
	I0610 11:39:26.117942   54458 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem (1123 bytes)
	I0610 11:39:26.117976   54458 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem (1675 bytes)
	I0610 11:39:26.118030   54458 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem (1708 bytes)
	I0610 11:39:26.118628   54458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 11:39:26.143703   54458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 11:39:26.166305   54458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 11:39:26.189834   54458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 11:39:26.216798   54458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0610 11:39:26.245996   54458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 11:39:26.269970   54458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 11:39:26.296058   54458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 11:39:26.321663   54458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 11:39:26.347626   54458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem --> /usr/share/ca-certificates/10758.pem (1338 bytes)
	I0610 11:39:26.370998   54458 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /usr/share/ca-certificates/107582.pem (1708 bytes)
	I0610 11:39:26.393984   54458 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 11:39:26.410034   54458 ssh_runner.go:195] Run: openssl version
	I0610 11:39:26.416080   54458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10758.pem && ln -fs /usr/share/ca-certificates/10758.pem /etc/ssl/certs/10758.pem"
	I0610 11:39:26.426884   54458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10758.pem
	I0610 11:39:26.431574   54458 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:34 /usr/share/ca-certificates/10758.pem
	I0610 11:39:26.431631   54458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10758.pem
	I0610 11:39:26.437520   54458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10758.pem /etc/ssl/certs/51391683.0"
	I0610 11:39:26.447672   54458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107582.pem && ln -fs /usr/share/ca-certificates/107582.pem /etc/ssl/certs/107582.pem"
	I0610 11:39:26.457909   54458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107582.pem
	I0610 11:39:26.462132   54458 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:34 /usr/share/ca-certificates/107582.pem
	I0610 11:39:26.462182   54458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107582.pem
	I0610 11:39:26.467589   54458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107582.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 11:39:26.478036   54458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 11:39:26.488354   54458 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:39:26.492684   54458 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:39:26.492748   54458 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:39:26.498436   54458 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 11:39:26.511686   54458 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 11:39:26.516263   54458 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0610 11:39:26.516323   54458 kubeadm.go:391] StartCluster: {Name:old-k8s-version-166693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-166693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.34 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:39:26.516417   54458 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0610 11:39:26.516470   54458 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 11:39:26.572349   54458 cri.go:89] found id: ""
	I0610 11:39:26.572422   54458 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 11:39:26.591993   54458 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 11:39:26.609282   54458 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:39:26.619362   54458 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:39:26.619386   54458 kubeadm.go:156] found existing configuration files:
	
	I0610 11:39:26.619434   54458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 11:39:26.628268   54458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:39:26.628336   54458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:39:26.637543   54458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 11:39:26.646660   54458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:39:26.646723   54458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:39:26.655929   54458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 11:39:26.665778   54458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:39:26.665849   54458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:39:26.678576   54458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 11:39:26.690523   54458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:39:26.690579   54458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 11:39:26.700471   54458 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 11:39:26.964185   54458 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 11:41:24.625234   54458 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0610 11:41:24.625355   54458 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0610 11:41:24.626776   54458 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0610 11:41:24.626834   54458 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 11:41:24.626926   54458 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 11:41:24.627063   54458 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 11:41:24.627192   54458 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 11:41:24.627302   54458 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 11:41:24.628967   54458 out.go:204]   - Generating certificates and keys ...
	I0610 11:41:24.629065   54458 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 11:41:24.629152   54458 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 11:41:24.629252   54458 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 11:41:24.629352   54458 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0610 11:41:24.629436   54458 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0610 11:41:24.629518   54458 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0610 11:41:24.629594   54458 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0610 11:41:24.629748   54458 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-166693] and IPs [192.168.72.34 127.0.0.1 ::1]
	I0610 11:41:24.629833   54458 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0610 11:41:24.629990   54458 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-166693] and IPs [192.168.72.34 127.0.0.1 ::1]
	I0610 11:41:24.630066   54458 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 11:41:24.630115   54458 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 11:41:24.630156   54458 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0610 11:41:24.630214   54458 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 11:41:24.630300   54458 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 11:41:24.630369   54458 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 11:41:24.630456   54458 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 11:41:24.630543   54458 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 11:41:24.630669   54458 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 11:41:24.630785   54458 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 11:41:24.630841   54458 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 11:41:24.630928   54458 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 11:41:24.632284   54458 out.go:204]   - Booting up control plane ...
	I0610 11:41:24.632394   54458 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 11:41:24.632498   54458 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 11:41:24.632588   54458 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 11:41:24.632700   54458 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 11:41:24.632927   54458 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 11:41:24.633021   54458 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0610 11:41:24.633130   54458 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:41:24.633300   54458 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:41:24.633402   54458 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:41:24.633598   54458 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:41:24.633701   54458 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:41:24.633932   54458 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:41:24.634022   54458 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:41:24.634207   54458 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:41:24.634279   54458 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:41:24.634430   54458 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:41:24.634441   54458 kubeadm.go:309] 
	I0610 11:41:24.634494   54458 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0610 11:41:24.634529   54458 kubeadm.go:309] 		timed out waiting for the condition
	I0610 11:41:24.634537   54458 kubeadm.go:309] 
	I0610 11:41:24.634596   54458 kubeadm.go:309] 	This error is likely caused by:
	I0610 11:41:24.634639   54458 kubeadm.go:309] 		- The kubelet is not running
	I0610 11:41:24.634760   54458 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0610 11:41:24.634772   54458 kubeadm.go:309] 
	I0610 11:41:24.634875   54458 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0610 11:41:24.634928   54458 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0610 11:41:24.634977   54458 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0610 11:41:24.634991   54458 kubeadm.go:309] 
	I0610 11:41:24.635159   54458 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0610 11:41:24.635278   54458 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0610 11:41:24.635290   54458 kubeadm.go:309] 
	I0610 11:41:24.635437   54458 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0610 11:41:24.635562   54458 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0610 11:41:24.635671   54458 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0610 11:41:24.635774   54458 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0610 11:41:24.635797   54458 kubeadm.go:309] 
	W0610 11:41:24.635890   54458 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-166693] and IPs [192.168.72.34 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-166693] and IPs [192.168.72.34 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-166693] and IPs [192.168.72.34 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-166693] and IPs [192.168.72.34 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0610 11:41:24.635934   54458 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0610 11:41:25.109914   54458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:41:25.126482   54458 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:41:25.137771   54458 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:41:25.137796   54458 kubeadm.go:156] found existing configuration files:
	
	I0610 11:41:25.137844   54458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 11:41:25.146851   54458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:41:25.146909   54458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:41:25.156146   54458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 11:41:25.164600   54458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:41:25.164662   54458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:41:25.173676   54458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 11:41:25.182386   54458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:41:25.182435   54458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:41:25.191631   54458 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 11:41:25.200892   54458 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:41:25.200978   54458 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 11:41:25.210107   54458 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 11:41:25.430352   54458 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 11:43:21.578083   54458 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0610 11:43:21.578196   54458 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0610 11:43:21.579875   54458 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0610 11:43:21.579936   54458 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 11:43:21.580027   54458 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 11:43:21.580111   54458 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 11:43:21.580225   54458 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 11:43:21.580305   54458 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 11:43:21.582111   54458 out.go:204]   - Generating certificates and keys ...
	I0610 11:43:21.582186   54458 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 11:43:21.582243   54458 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 11:43:21.582349   54458 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 11:43:21.582436   54458 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 11:43:21.582530   54458 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 11:43:21.582616   54458 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 11:43:21.582704   54458 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 11:43:21.582789   54458 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 11:43:21.582892   54458 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 11:43:21.582993   54458 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 11:43:21.583048   54458 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 11:43:21.583124   54458 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 11:43:21.583221   54458 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 11:43:21.583286   54458 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 11:43:21.583352   54458 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 11:43:21.583400   54458 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 11:43:21.583526   54458 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 11:43:21.583624   54458 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 11:43:21.583681   54458 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 11:43:21.583775   54458 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 11:43:21.585365   54458 out.go:204]   - Booting up control plane ...
	I0610 11:43:21.585466   54458 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 11:43:21.585553   54458 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 11:43:21.585629   54458 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 11:43:21.585725   54458 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 11:43:21.585918   54458 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 11:43:21.585967   54458 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0610 11:43:21.586043   54458 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:43:21.586248   54458 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:43:21.586346   54458 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:43:21.586561   54458 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:43:21.586621   54458 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:43:21.586809   54458 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:43:21.586903   54458 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:43:21.587107   54458 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:43:21.587169   54458 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:43:21.587329   54458 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:43:21.587336   54458 kubeadm.go:309] 
	I0610 11:43:21.587369   54458 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0610 11:43:21.587409   54458 kubeadm.go:309] 		timed out waiting for the condition
	I0610 11:43:21.587423   54458 kubeadm.go:309] 
	I0610 11:43:21.587474   54458 kubeadm.go:309] 	This error is likely caused by:
	I0610 11:43:21.587504   54458 kubeadm.go:309] 		- The kubelet is not running
	I0610 11:43:21.587594   54458 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0610 11:43:21.587602   54458 kubeadm.go:309] 
	I0610 11:43:21.587707   54458 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0610 11:43:21.587763   54458 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0610 11:43:21.587810   54458 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0610 11:43:21.587821   54458 kubeadm.go:309] 
	I0610 11:43:21.587904   54458 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0610 11:43:21.587972   54458 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0610 11:43:21.587981   54458 kubeadm.go:309] 
	I0610 11:43:21.588126   54458 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0610 11:43:21.588238   54458 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0610 11:43:21.588341   54458 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0610 11:43:21.588419   54458 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0610 11:43:21.588442   54458 kubeadm.go:309] 
	I0610 11:43:21.588497   54458 kubeadm.go:393] duration metric: took 3m55.072180081s to StartCluster
	I0610 11:43:21.588544   54458 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:43:21.588598   54458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:43:21.630767   54458 cri.go:89] found id: ""
	I0610 11:43:21.630799   54458 logs.go:276] 0 containers: []
	W0610 11:43:21.630810   54458 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:43:21.630817   54458 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:43:21.630889   54458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:43:21.669373   54458 cri.go:89] found id: ""
	I0610 11:43:21.669402   54458 logs.go:276] 0 containers: []
	W0610 11:43:21.669410   54458 logs.go:278] No container was found matching "etcd"
	I0610 11:43:21.669423   54458 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:43:21.669472   54458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:43:21.701513   54458 cri.go:89] found id: ""
	I0610 11:43:21.701545   54458 logs.go:276] 0 containers: []
	W0610 11:43:21.701556   54458 logs.go:278] No container was found matching "coredns"
	I0610 11:43:21.701562   54458 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:43:21.701631   54458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:43:21.734867   54458 cri.go:89] found id: ""
	I0610 11:43:21.734902   54458 logs.go:276] 0 containers: []
	W0610 11:43:21.734910   54458 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:43:21.734916   54458 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:43:21.734972   54458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:43:21.770768   54458 cri.go:89] found id: ""
	I0610 11:43:21.770798   54458 logs.go:276] 0 containers: []
	W0610 11:43:21.770806   54458 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:43:21.770812   54458 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:43:21.770861   54458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:43:21.803556   54458 cri.go:89] found id: ""
	I0610 11:43:21.803578   54458 logs.go:276] 0 containers: []
	W0610 11:43:21.803586   54458 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:43:21.803594   54458 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:43:21.803658   54458 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:43:21.836121   54458 cri.go:89] found id: ""
	I0610 11:43:21.836162   54458 logs.go:276] 0 containers: []
	W0610 11:43:21.836171   54458 logs.go:278] No container was found matching "kindnet"
	I0610 11:43:21.836192   54458 logs.go:123] Gathering logs for kubelet ...
	I0610 11:43:21.836206   54458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:43:21.885236   54458 logs.go:123] Gathering logs for dmesg ...
	I0610 11:43:21.885272   54458 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:43:21.898167   54458 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:43:21.898197   54458 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:43:22.006483   54458 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:43:22.006510   54458 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:43:22.006527   54458 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:43:22.095983   54458 logs.go:123] Gathering logs for container status ...
	I0610 11:43:22.096022   54458 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0610 11:43:22.142642   54458 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0610 11:43:22.142699   54458 out.go:239] * 
	* 
	W0610 11:43:22.142772   54458 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0610 11:43:22.142804   54458 out.go:239] * 
	* 
	W0610 11:43:22.144023   54458 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 11:43:22.147617   54458 out.go:177] 
	W0610 11:43:22.149005   54458 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0610 11:43:22.149072   54458 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0610 11:43:22.149099   54458 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0610 11:43:22.150533   54458 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-166693 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-166693 -n old-k8s-version-166693
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-166693 -n old-k8s-version-166693: exit status 6 (232.32396ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 11:43:22.420735   56895 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-166693" does not appear in /home/jenkins/minikube-integration/19046-3880/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-166693" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (265.47s)
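The failure above is the kubelet never answering its health endpoint while 'kubeadm init' waited for the control plane, and minikube's own suggestion is to inspect the kubelet and retry with an explicit cgroup driver. A hypothetical manual triage against the same profile, built only from commands already quoted in this output (run from the Jenkins workspace, not part of the recorded test run), might look like:

	# Check kubelet health inside the failing VM
	out/minikube-linux-amd64 ssh -p old-k8s-version-166693 -- sudo systemctl status kubelet
	out/minikube-linux-amd64 ssh -p old-k8s-version-166693 -- sudo journalctl -xeu kubelet -n 200
	# List any control-plane containers via CRI-O, as recommended by kubeadm above
	out/minikube-linux-amd64 ssh -p old-k8s-version-166693 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	# Retry the start with the cgroup-driver override suggested by the error message
	out/minikube-linux-amd64 start -p old-k8s-version-166693 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd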

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-832735 --alsologtostderr -v=3
E0610 11:40:35.502581   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-832735 --alsologtostderr -v=3: exit status 82 (2m0.519026486s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-832735"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 11:40:28.428049   55727 out.go:291] Setting OutFile to fd 1 ...
	I0610 11:40:28.428325   55727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:40:28.428336   55727 out.go:304] Setting ErrFile to fd 2...
	I0610 11:40:28.428341   55727 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:40:28.428520   55727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 11:40:28.428738   55727 out.go:298] Setting JSON to false
	I0610 11:40:28.428808   55727 mustload.go:65] Loading cluster: embed-certs-832735
	I0610 11:40:28.429166   55727 config.go:182] Loaded profile config "embed-certs-832735": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:40:28.429229   55727 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/embed-certs-832735/config.json ...
	I0610 11:40:28.429391   55727 mustload.go:65] Loading cluster: embed-certs-832735
	I0610 11:40:28.429485   55727 config.go:182] Loaded profile config "embed-certs-832735": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:40:28.429507   55727 stop.go:39] StopHost: embed-certs-832735
	I0610 11:40:28.429863   55727 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:40:28.429905   55727 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:40:28.444454   55727 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33781
	I0610 11:40:28.444896   55727 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:40:28.445491   55727 main.go:141] libmachine: Using API Version  1
	I0610 11:40:28.445515   55727 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:40:28.445863   55727 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:40:28.449008   55727 out.go:177] * Stopping node "embed-certs-832735"  ...
	I0610 11:40:28.450404   55727 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0610 11:40:28.450449   55727 main.go:141] libmachine: (embed-certs-832735) Calling .DriverName
	I0610 11:40:28.450677   55727 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0610 11:40:28.450698   55727 main.go:141] libmachine: (embed-certs-832735) Calling .GetSSHHostname
	I0610 11:40:28.453496   55727 main.go:141] libmachine: (embed-certs-832735) DBG | domain embed-certs-832735 has defined MAC address 52:54:00:db:f7:d7 in network mk-embed-certs-832735
	I0610 11:40:28.453870   55727 main.go:141] libmachine: (embed-certs-832735) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:f7:d7", ip: ""} in network mk-embed-certs-832735: {Iface:virbr4 ExpiryTime:2024-06-10 12:39:33 +0000 UTC Type:0 Mac:52:54:00:db:f7:d7 Iaid: IPaddr:192.168.61.19 Prefix:24 Hostname:embed-certs-832735 Clientid:01:52:54:00:db:f7:d7}
	I0610 11:40:28.453906   55727 main.go:141] libmachine: (embed-certs-832735) DBG | domain embed-certs-832735 has defined IP address 192.168.61.19 and MAC address 52:54:00:db:f7:d7 in network mk-embed-certs-832735
	I0610 11:40:28.454052   55727 main.go:141] libmachine: (embed-certs-832735) Calling .GetSSHPort
	I0610 11:40:28.454223   55727 main.go:141] libmachine: (embed-certs-832735) Calling .GetSSHKeyPath
	I0610 11:40:28.454382   55727 main.go:141] libmachine: (embed-certs-832735) Calling .GetSSHUsername
	I0610 11:40:28.454531   55727 sshutil.go:53] new ssh client: &{IP:192.168.61.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/embed-certs-832735/id_rsa Username:docker}
	I0610 11:40:28.561706   55727 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0610 11:40:28.624366   55727 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0610 11:40:28.691577   55727 main.go:141] libmachine: Stopping "embed-certs-832735"...
	I0610 11:40:28.691621   55727 main.go:141] libmachine: (embed-certs-832735) Calling .GetState
	I0610 11:40:28.693437   55727 main.go:141] libmachine: (embed-certs-832735) Calling .Stop
	I0610 11:40:28.697063   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 0/120
	I0610 11:40:29.698427   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 1/120
	I0610 11:40:30.699987   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 2/120
	I0610 11:40:31.701608   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 3/120
	I0610 11:40:32.703215   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 4/120
	I0610 11:40:33.705337   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 5/120
	I0610 11:40:34.707449   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 6/120
	I0610 11:40:35.709398   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 7/120
	I0610 11:40:36.711880   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 8/120
	I0610 11:40:37.713424   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 9/120
	I0610 11:40:38.715869   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 10/120
	I0610 11:40:39.717206   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 11/120
	I0610 11:40:40.718662   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 12/120
	I0610 11:40:41.720053   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 13/120
	I0610 11:40:42.721450   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 14/120
	I0610 11:40:43.723483   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 15/120
	I0610 11:40:44.725038   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 16/120
	I0610 11:40:45.726535   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 17/120
	I0610 11:40:46.728003   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 18/120
	I0610 11:40:47.729601   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 19/120
	I0610 11:40:48.731774   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 20/120
	I0610 11:40:49.733175   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 21/120
	I0610 11:40:50.735199   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 22/120
	I0610 11:40:51.736788   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 23/120
	I0610 11:40:52.738655   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 24/120
	I0610 11:40:53.740980   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 25/120
	I0610 11:40:54.742242   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 26/120
	I0610 11:40:55.743733   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 27/120
	I0610 11:40:56.745716   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 28/120
	I0610 11:40:57.747011   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 29/120
	I0610 11:40:58.748433   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 30/120
	I0610 11:40:59.749935   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 31/120
	I0610 11:41:00.751324   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 32/120
	I0610 11:41:01.752810   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 33/120
	I0610 11:41:02.754158   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 34/120
	I0610 11:41:03.756204   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 35/120
	I0610 11:41:04.757488   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 36/120
	I0610 11:41:05.759055   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 37/120
	I0610 11:41:06.760537   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 38/120
	I0610 11:41:07.762052   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 39/120
	I0610 11:41:08.764435   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 40/120
	I0610 11:41:09.765940   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 41/120
	I0610 11:41:10.767462   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 42/120
	I0610 11:41:11.769086   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 43/120
	I0610 11:41:12.770730   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 44/120
	I0610 11:41:13.772735   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 45/120
	I0610 11:41:14.774090   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 46/120
	I0610 11:41:15.775430   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 47/120
	I0610 11:41:16.777037   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 48/120
	I0610 11:41:17.778363   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 49/120
	I0610 11:41:18.780753   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 50/120
	I0610 11:41:19.782089   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 51/120
	I0610 11:41:20.783405   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 52/120
	I0610 11:41:21.784854   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 53/120
	I0610 11:41:22.786248   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 54/120
	I0610 11:41:23.788275   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 55/120
	I0610 11:41:24.789864   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 56/120
	I0610 11:41:25.791255   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 57/120
	I0610 11:41:26.792792   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 58/120
	I0610 11:41:27.794418   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 59/120
	I0610 11:41:28.796630   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 60/120
	I0610 11:41:29.798250   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 61/120
	I0610 11:41:30.799560   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 62/120
	I0610 11:41:31.801061   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 63/120
	I0610 11:41:32.802600   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 64/120
	I0610 11:41:33.804519   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 65/120
	I0610 11:41:34.806088   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 66/120
	I0610 11:41:35.807858   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 67/120
	I0610 11:41:36.809672   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 68/120
	I0610 11:41:37.811533   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 69/120
	I0610 11:41:38.813692   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 70/120
	I0610 11:41:39.815099   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 71/120
	I0610 11:41:40.816671   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 72/120
	I0610 11:41:41.818293   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 73/120
	I0610 11:41:42.819804   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 74/120
	I0610 11:41:43.821741   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 75/120
	I0610 11:41:44.823187   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 76/120
	I0610 11:41:45.824691   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 77/120
	I0610 11:41:46.826156   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 78/120
	I0610 11:41:47.827593   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 79/120
	I0610 11:41:48.829554   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 80/120
	I0610 11:41:49.831043   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 81/120
	I0610 11:41:50.832273   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 82/120
	I0610 11:41:51.833768   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 83/120
	I0610 11:41:52.835222   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 84/120
	I0610 11:41:53.837477   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 85/120
	I0610 11:41:54.839075   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 86/120
	I0610 11:41:55.840588   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 87/120
	I0610 11:41:56.842209   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 88/120
	I0610 11:41:57.843555   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 89/120
	I0610 11:41:58.845732   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 90/120
	I0610 11:41:59.847200   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 91/120
	I0610 11:42:00.848924   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 92/120
	I0610 11:42:01.850391   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 93/120
	I0610 11:42:02.851797   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 94/120
	I0610 11:42:03.853657   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 95/120
	I0610 11:42:04.855478   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 96/120
	I0610 11:42:05.856656   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 97/120
	I0610 11:42:06.858276   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 98/120
	I0610 11:42:07.859839   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 99/120
	I0610 11:42:08.861955   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 100/120
	I0610 11:42:09.863583   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 101/120
	I0610 11:42:10.864961   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 102/120
	I0610 11:42:11.866297   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 103/120
	I0610 11:42:12.867881   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 104/120
	I0610 11:42:13.869412   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 105/120
	I0610 11:42:14.870988   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 106/120
	I0610 11:42:15.873171   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 107/120
	I0610 11:42:16.874519   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 108/120
	I0610 11:42:17.875871   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 109/120
	I0610 11:42:18.878352   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 110/120
	I0610 11:42:19.879972   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 111/120
	I0610 11:42:20.881322   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 112/120
	I0610 11:42:21.883571   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 113/120
	I0610 11:42:22.884942   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 114/120
	I0610 11:42:23.886614   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 115/120
	I0610 11:42:24.889305   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 116/120
	I0610 11:42:25.890575   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 117/120
	I0610 11:42:26.892454   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 118/120
	I0610 11:42:27.893885   55727 main.go:141] libmachine: (embed-certs-832735) Waiting for machine to stop 119/120
	I0610 11:42:28.894476   55727 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0610 11:42:28.894521   55727 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0610 11:42:28.896488   55727 out.go:177] 
	W0610 11:42:28.898052   55727 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0610 11:42:28.898070   55727 out.go:239] * 
	* 
	W0610 11:42:28.900855   55727 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 11:42:28.902255   55727 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-832735 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-832735 -n embed-certs-832735
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-832735 -n embed-certs-832735: exit status 3 (18.586337293s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 11:42:47.489361   56556 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.19:22: connect: no route to host
	E0610 11:42:47.489382   56556 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.19:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-832735" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.11s)
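The stop fails after `minikube stop` spends 120 one-second polls waiting for the KVM guest to power off and then exits 82 (GUEST_STOP_TIMEOUT) while libvirt still reports the domain as "Running". Below is a minimal Go sketch of a possible fallback, not part of the test suite: it retries the graceful stop and, if that times out, forces the domain off with `virsh destroy`. It assumes the libvirt domain is named after the profile (as the DBG lines above indicate) and that the caller can reach qemu:///system.

	// forcestop.go - hedged sketch of a fallback for GUEST_STOP_TIMEOUT.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// run executes a command and streams its output, returning any error.
	func run(name string, args ...string) error {
		cmd := exec.Command(name, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		profile := "embed-certs-832735" // illustrative profile name taken from the log above

		// First try the graceful stop, exactly as the test does.
		if err := run("out/minikube-linux-amd64", "stop", "-p", profile, "--alsologtostderr", "-v=3"); err == nil {
			return
		}

		// The graceful stop timed out; force the libvirt domain off and let
		// minikube reconcile the machine state on the next start.
		fmt.Fprintf(os.Stderr, "graceful stop failed, forcing %s off via virsh\n", profile)
		if err := run("virsh", "-c", "qemu:///system", "destroy", profile); err != nil {
			fmt.Fprintln(os.Stderr, "virsh destroy failed:", err)
			os.Exit(1)
		}
	}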

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-298179 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-298179 --alsologtostderr -v=3: exit status 82 (2m0.541636803s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-298179"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 11:41:41.085636   56145 out.go:291] Setting OutFile to fd 1 ...
	I0610 11:41:41.085773   56145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:41:41.085785   56145 out.go:304] Setting ErrFile to fd 2...
	I0610 11:41:41.085792   56145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:41:41.086113   56145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 11:41:41.086474   56145 out.go:298] Setting JSON to false
	I0610 11:41:41.086569   56145 mustload.go:65] Loading cluster: no-preload-298179
	I0610 11:41:41.086909   56145 config.go:182] Loaded profile config "no-preload-298179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:41:41.086974   56145 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/no-preload-298179/config.json ...
	I0610 11:41:41.087174   56145 mustload.go:65] Loading cluster: no-preload-298179
	I0610 11:41:41.087294   56145 config.go:182] Loaded profile config "no-preload-298179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:41:41.087327   56145 stop.go:39] StopHost: no-preload-298179
	I0610 11:41:41.087765   56145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:41:41.087814   56145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:41:41.106386   56145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43475
	I0610 11:41:41.107001   56145 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:41:41.107557   56145 main.go:141] libmachine: Using API Version  1
	I0610 11:41:41.107579   56145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:41:41.107969   56145 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:41:41.110289   56145 out.go:177] * Stopping node "no-preload-298179"  ...
	I0610 11:41:41.111551   56145 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0610 11:41:41.111575   56145 main.go:141] libmachine: (no-preload-298179) Calling .DriverName
	I0610 11:41:41.111917   56145 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0610 11:41:41.111941   56145 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHHostname
	I0610 11:41:41.115205   56145 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:41:41.115389   56145 main.go:141] libmachine: (no-preload-298179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:72:68", ip: ""} in network mk-no-preload-298179: {Iface:virbr2 ExpiryTime:2024-06-10 12:39:55 +0000 UTC Type:0 Mac:52:54:00:92:72:68 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:no-preload-298179 Clientid:01:52:54:00:92:72:68}
	I0610 11:41:41.115425   56145 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined IP address 192.168.39.48 and MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:41:41.115626   56145 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHPort
	I0610 11:41:41.115807   56145 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHKeyPath
	I0610 11:41:41.115967   56145 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHUsername
	I0610 11:41:41.116119   56145 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/no-preload-298179/id_rsa Username:docker}
	I0610 11:41:41.226376   56145 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0610 11:41:41.300033   56145 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0610 11:41:41.364305   56145 main.go:141] libmachine: Stopping "no-preload-298179"...
	I0610 11:41:41.364336   56145 main.go:141] libmachine: (no-preload-298179) Calling .GetState
	I0610 11:41:41.366176   56145 main.go:141] libmachine: (no-preload-298179) Calling .Stop
	I0610 11:41:41.369951   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 0/120
	I0610 11:41:42.371498   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 1/120
	I0610 11:41:43.373012   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 2/120
	I0610 11:41:44.374510   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 3/120
	I0610 11:41:45.375879   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 4/120
	I0610 11:41:46.377465   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 5/120
	I0610 11:41:47.378894   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 6/120
	I0610 11:41:48.380348   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 7/120
	I0610 11:41:49.381874   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 8/120
	I0610 11:41:50.383259   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 9/120
	I0610 11:41:51.384644   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 10/120
	I0610 11:41:52.386052   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 11/120
	I0610 11:41:53.387415   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 12/120
	I0610 11:41:54.388767   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 13/120
	I0610 11:41:55.390469   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 14/120
	I0610 11:41:56.392624   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 15/120
	I0610 11:41:57.394366   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 16/120
	I0610 11:41:58.395888   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 17/120
	I0610 11:41:59.397367   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 18/120
	I0610 11:42:00.399715   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 19/120
	I0610 11:42:01.401366   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 20/120
	I0610 11:42:02.403530   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 21/120
	I0610 11:42:03.405243   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 22/120
	I0610 11:42:04.407495   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 23/120
	I0610 11:42:05.408933   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 24/120
	I0610 11:42:06.410561   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 25/120
	I0610 11:42:07.412369   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 26/120
	I0610 11:42:08.414017   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 27/120
	I0610 11:42:09.415324   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 28/120
	I0610 11:42:10.416742   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 29/120
	I0610 11:42:11.419060   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 30/120
	I0610 11:42:12.420448   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 31/120
	I0610 11:42:13.422074   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 32/120
	I0610 11:42:14.423573   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 33/120
	I0610 11:42:15.424929   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 34/120
	I0610 11:42:16.426469   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 35/120
	I0610 11:42:17.427883   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 36/120
	I0610 11:42:18.429333   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 37/120
	I0610 11:42:19.430943   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 38/120
	I0610 11:42:20.432532   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 39/120
	I0610 11:42:21.434862   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 40/120
	I0610 11:42:22.436261   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 41/120
	I0610 11:42:23.437720   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 42/120
	I0610 11:42:24.439479   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 43/120
	I0610 11:42:25.440746   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 44/120
	I0610 11:42:26.442866   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 45/120
	I0610 11:42:27.444212   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 46/120
	I0610 11:42:28.445560   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 47/120
	I0610 11:42:29.447506   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 48/120
	I0610 11:42:30.449116   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 49/120
	I0610 11:42:31.451490   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 50/120
	I0610 11:42:32.453139   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 51/120
	I0610 11:42:33.454745   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 52/120
	I0610 11:42:34.456361   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 53/120
	I0610 11:42:35.458109   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 54/120
	I0610 11:42:36.460676   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 55/120
	I0610 11:42:37.462121   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 56/120
	I0610 11:42:38.463764   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 57/120
	I0610 11:42:39.465501   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 58/120
	I0610 11:42:40.466944   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 59/120
	I0610 11:42:41.469331   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 60/120
	I0610 11:42:42.470737   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 61/120
	I0610 11:42:43.472046   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 62/120
	I0610 11:42:44.473763   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 63/120
	I0610 11:42:45.475267   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 64/120
	I0610 11:42:46.477609   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 65/120
	I0610 11:42:47.479163   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 66/120
	I0610 11:42:48.480587   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 67/120
	I0610 11:42:49.482122   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 68/120
	I0610 11:42:50.483571   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 69/120
	I0610 11:42:51.485960   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 70/120
	I0610 11:42:52.487409   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 71/120
	I0610 11:42:53.489030   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 72/120
	I0610 11:42:54.490657   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 73/120
	I0610 11:42:55.492190   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 74/120
	I0610 11:42:56.494452   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 75/120
	I0610 11:42:57.495852   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 76/120
	I0610 11:42:58.497376   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 77/120
	I0610 11:42:59.498801   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 78/120
	I0610 11:43:00.500166   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 79/120
	I0610 11:43:01.501671   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 80/120
	I0610 11:43:02.503220   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 81/120
	I0610 11:43:03.504899   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 82/120
	I0610 11:43:04.506389   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 83/120
	I0610 11:43:05.508034   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 84/120
	I0610 11:43:06.510322   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 85/120
	I0610 11:43:07.511872   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 86/120
	I0610 11:43:08.513279   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 87/120
	I0610 11:43:09.514650   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 88/120
	I0610 11:43:10.516196   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 89/120
	I0610 11:43:11.517793   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 90/120
	I0610 11:43:12.519334   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 91/120
	I0610 11:43:13.520984   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 92/120
	I0610 11:43:14.522427   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 93/120
	I0610 11:43:15.523926   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 94/120
	I0610 11:43:16.526270   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 95/120
	I0610 11:43:17.527735   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 96/120
	I0610 11:43:18.529585   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 97/120
	I0610 11:43:19.530953   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 98/120
	I0610 11:43:20.532824   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 99/120
	I0610 11:43:21.534851   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 100/120
	I0610 11:43:22.536226   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 101/120
	I0610 11:43:23.537591   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 102/120
	I0610 11:43:24.538927   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 103/120
	I0610 11:43:25.540416   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 104/120
	I0610 11:43:26.542833   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 105/120
	I0610 11:43:27.544419   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 106/120
	I0610 11:43:28.546583   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 107/120
	I0610 11:43:29.548170   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 108/120
	I0610 11:43:30.549645   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 109/120
	I0610 11:43:31.552109   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 110/120
	I0610 11:43:32.554354   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 111/120
	I0610 11:43:33.556284   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 112/120
	I0610 11:43:34.557955   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 113/120
	I0610 11:43:35.559613   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 114/120
	I0610 11:43:36.561685   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 115/120
	I0610 11:43:37.563803   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 116/120
	I0610 11:43:38.565353   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 117/120
	I0610 11:43:39.566778   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 118/120
	I0610 11:43:40.568418   56145 main.go:141] libmachine: (no-preload-298179) Waiting for machine to stop 119/120
	I0610 11:43:41.569325   56145 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0610 11:43:41.569380   56145 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0610 11:43:41.571417   56145 out.go:177] 
	W0610 11:43:41.573099   56145 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0610 11:43:41.573131   56145 out.go:239] * 
	* 
	W0610 11:43:41.575763   56145 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 11:43:41.577055   56145 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-298179 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-298179 -n no-preload-298179
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-298179 -n no-preload-298179: exit status 3 (18.616043068s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 11:44:00.193366   57108 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	E0610 11:44:00.193388   57108 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-298179" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.16s)
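The no-preload stop fails identically: all 120 polls elapse and the domain never leaves "Running". The small diagnostic sketch below is an assumption rather than anything the suite does; it mirrors that wait loop but asks libvirt directly for the domain state, which helps tell a guest that ignored the ACPI shutdown apart from a stop request that never reached it. It assumes virsh access to qemu:///system and a domain named after the profile.

	// poll_domstate.go - hedged diagnostic sketch for the stop timeout above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		domain := "no-preload-298179" // illustrative; matches the profile/domain name in the DBG lines

		for i := 0; i < 120; i++ {
			// "virsh domstate" prints states such as "running" or "shut off".
			out, err := exec.Command("virsh", "-c", "qemu:///system", "domstate", domain).CombinedOutput()
			state := strings.TrimSpace(string(out))
			fmt.Printf("%3d/120 domstate=%q err=%v\n", i, state, err)
			if state == "shut off" {
				return
			}
			time.Sleep(time.Second)
		}
		fmt.Println("domain never reached \"shut off\"; the guest likely ignored the ACPI shutdown request")
	}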

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-832735 -n embed-certs-832735
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-832735 -n embed-certs-832735: exit status 3 (3.16791207s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 11:42:50.657383   56651 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.19:22: connect: no route to host
	E0610 11:42:50.657405   56651 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.19:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-832735 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-832735 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15299905s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.19:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-832735 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-832735 -n embed-certs-832735
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-832735 -n embed-certs-832735: exit status 3 (3.062527666s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 11:42:59.873382   56739 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.19:22: connect: no route to host
	E0610 11:42:59.873407   56739 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.19:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-832735" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
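EnableAddonAfterStop runs while the host is in "Error" rather than "Stopped" (SSH to 192.168.61.19:22 is unreachable), so `addons enable dashboard` fails with MK_ADDON_ENABLE_PAUSED. The sketch below is a minimal assumption of how one might gate the addon call on the host actually reporting "Stopped" first; the profile name and timeout are illustrative, and since `minikube status` exits non-zero whenever the host is not Running, the sketch reads stdout instead of the exit code.

	// wait_stopped.go - hedged sketch: wait for Host=Stopped before enabling an addon.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
		"time"
	)

	// hostStatus returns the {{.Host}} field of "minikube status" for a profile.
	func hostStatus(profile string) string {
		out, _ := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", profile).Output()
		return strings.TrimSpace(string(out))
	}

	func main() {
		profile := "embed-certs-832735" // illustrative profile name

		deadline := time.Now().Add(3 * time.Minute)
		for hostStatus(profile) != "Stopped" {
			if time.Now().After(deadline) {
				fmt.Fprintln(os.Stderr, "host never reached Stopped, last status:", hostStatus(profile))
				os.Exit(1)
			}
			time.Sleep(5 * time.Second)
		}

		cmd := exec.Command("out/minikube-linux-amd64", "addons", "enable", "dashboard",
			"-p", profile, "--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}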

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-166693 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-166693 create -f testdata/busybox.yaml: exit status 1 (43.618967ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-166693" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-166693 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-166693 -n old-k8s-version-166693
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-166693 -n old-k8s-version-166693: exit status 6 (223.898688ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 11:43:22.690877   56936 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-166693" does not appear in /home/jenkins/minikube-integration/19046-3880/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-166693" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-166693 -n old-k8s-version-166693
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-166693 -n old-k8s-version-166693: exit status 6 (219.051228ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 11:43:22.910370   56966 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-166693" does not appear in /home/jenkins/minikube-integration/19046-3880/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-166693" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)
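DeployApp fails before any YAML is applied: the kubeconfig has no "old-k8s-version-166693" context, and the status output above already suggests `minikube update-context`. The following is a minimal sketch of that repair path, offered as an assumption rather than part of the suite; it only helps when the profile's cluster is otherwise reachable.

	// ensure_context.go - hedged sketch: recreate the kubectl context, then deploy.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// run executes a command and streams its output, returning any error.
	func run(name string, args ...string) error {
		cmd := exec.Command(name, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		profile := "old-k8s-version-166693" // illustrative profile name

		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "listing contexts failed:", err)
			os.Exit(1)
		}
		if !strings.Contains("\n"+string(out)+"\n", "\n"+profile+"\n") {
			// The context is missing from the kubeconfig; ask minikube to rewrite it.
			if err := run("out/minikube-linux-amd64", "update-context", "-p", profile); err != nil {
				os.Exit(1)
			}
		}
		if err := run("kubectl", "--context", profile, "create", "-f", "testdata/busybox.yaml"); err != nil {
			os.Exit(1)
		}
	}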

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (102.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-166693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-166693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m42.557615397s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-166693 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-166693 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-166693 describe deploy/metrics-server -n kube-system: exit status 1 (44.487124ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-166693" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-166693 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-166693 -n old-k8s-version-166693
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-166693 -n old-k8s-version-166693: exit status 6 (218.115375ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 11:45:05.729280   57801 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-166693" does not appear in /home/jenkins/minikube-integration/19046-3880/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-166693" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (102.82s)
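The metrics-server enable callback runs kubectl inside the VM against localhost:8443 and gets "connection refused", i.e. the apiserver never came up. The sketch below is a minimal assumption of how one might wait for the in-VM apiserver to answer before enabling the addon; it further assumes that `minikube ssh` propagates the remote exit status and that /readyz is served without authentication, so treat it as a sketch rather than a drop-in fix.

	// wait_apiserver.go - hedged sketch: probe the in-VM apiserver before enabling metrics-server.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// run executes a command and streams its output, returning any error.
	func run(name string, args ...string) error {
		cmd := exec.Command(name, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		profile := "old-k8s-version-166693" // illustrative profile name

		// curl -f makes HTTP errors fail the probe; -k skips the self-signed cert.
		probe := []string{"-p", profile, "ssh", "--", "curl", "-skf", "https://localhost:8443/readyz"}

		deadline := time.Now().Add(5 * time.Minute)
		for run("out/minikube-linux-amd64", probe...) != nil {
			if time.Now().After(deadline) {
				fmt.Fprintln(os.Stderr, "apiserver never became ready on localhost:8443")
				os.Exit(1)
			}
			time.Sleep(10 * time.Second)
		}

		if err := run("out/minikube-linux-amd64", "addons", "enable", "metrics-server",
			"-p", profile,
			"--images=MetricsServer=registry.k8s.io/echoserver:1.4",
			"--registries=MetricsServer=fake.domain"); err != nil {
			os.Exit(1)
		}
	}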

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-298179 -n no-preload-298179
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-298179 -n no-preload-298179: exit status 3 (3.167244595s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 11:44:03.361277   57191 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	E0610 11:44:03.361299   57191 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-298179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-298179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152281275s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-298179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-298179 -n no-preload-298179
E0610 11:44:12.453076   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-298179 -n no-preload-298179: exit status 3 (3.064092587s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0610 11:44:12.577393   57526 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	E0610 11:44:12.577413   57526 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-298179" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (697.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-166693 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0610 11:46:57.913622   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
E0610 11:48:20.961267   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-166693 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m35.566643151s)

                                                
                                                
-- stdout --
	* [old-k8s-version-166693] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19046
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-166693" primary control-plane node in "old-k8s-version-166693" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-166693" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 11:45:12.253047   57945 out.go:291] Setting OutFile to fd 1 ...
	I0610 11:45:12.253312   57945 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:45:12.253327   57945 out.go:304] Setting ErrFile to fd 2...
	I0610 11:45:12.253368   57945 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:45:12.253820   57945 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 11:45:12.254691   57945 out.go:298] Setting JSON to false
	I0610 11:45:12.255564   57945 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5253,"bootTime":1718014659,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 11:45:12.255622   57945 start.go:139] virtualization: kvm guest
	I0610 11:45:12.257395   57945 out.go:177] * [old-k8s-version-166693] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 11:45:12.259309   57945 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 11:45:12.259309   57945 notify.go:220] Checking for updates...
	I0610 11:45:12.260543   57945 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 11:45:12.261890   57945 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 11:45:12.263198   57945 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 11:45:12.264610   57945 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 11:45:12.265903   57945 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 11:45:12.267692   57945 config.go:182] Loaded profile config "old-k8s-version-166693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0610 11:45:12.268188   57945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:45:12.268245   57945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:45:12.282809   57945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41449
	I0610 11:45:12.283315   57945 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:45:12.283953   57945 main.go:141] libmachine: Using API Version  1
	I0610 11:45:12.283980   57945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:45:12.284359   57945 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:45:12.284510   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .DriverName
	I0610 11:45:12.286312   57945 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0610 11:45:12.287545   57945 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 11:45:12.287874   57945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:45:12.287919   57945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:45:12.302285   57945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45733
	I0610 11:45:12.302709   57945 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:45:12.303218   57945 main.go:141] libmachine: Using API Version  1
	I0610 11:45:12.303239   57945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:45:12.303602   57945 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:45:12.303773   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .DriverName
	I0610 11:45:12.340929   57945 out.go:177] * Using the kvm2 driver based on existing profile
	I0610 11:45:12.342216   57945 start.go:297] selected driver: kvm2
	I0610 11:45:12.342229   57945 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-166693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-166693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.34 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:45:12.342365   57945 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 11:45:12.343096   57945 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 11:45:12.343182   57945 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19046-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0610 11:45:12.358392   57945 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0610 11:45:12.358920   57945 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 11:45:12.358992   57945 cni.go:84] Creating CNI manager for ""
	I0610 11:45:12.359010   57945 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:45:12.359072   57945 start.go:340] cluster config:
	{Name:old-k8s-version-166693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-166693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.34 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:45:12.359217   57945 iso.go:125] acquiring lock: {Name:mk85871dbcb370cef71a4aa2ff7ef43503908c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 11:45:12.361153   57945 out.go:177] * Starting "old-k8s-version-166693" primary control-plane node in "old-k8s-version-166693" cluster
	I0610 11:45:12.362491   57945 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0610 11:45:12.362535   57945 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0610 11:45:12.362543   57945 cache.go:56] Caching tarball of preloaded images
	I0610 11:45:12.362626   57945 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 11:45:12.362643   57945 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0610 11:45:12.362775   57945 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/config.json ...
	I0610 11:45:12.363012   57945 start.go:360] acquireMachinesLock for old-k8s-version-166693: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 11:48:16.301490   57945 start.go:364] duration metric: took 3m3.938440329s to acquireMachinesLock for "old-k8s-version-166693"
	I0610 11:48:16.301553   57945 start.go:96] Skipping create...Using existing machine configuration
	I0610 11:48:16.301561   57945 fix.go:54] fixHost starting: 
	I0610 11:48:16.301892   57945 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:48:16.301929   57945 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:48:16.318433   57945 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42783
	I0610 11:48:16.318850   57945 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:48:16.319413   57945 main.go:141] libmachine: Using API Version  1
	I0610 11:48:16.319442   57945 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:48:16.319742   57945 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:48:16.319896   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .DriverName
	I0610 11:48:16.320071   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetState
	I0610 11:48:16.321399   57945 fix.go:112] recreateIfNeeded on old-k8s-version-166693: state=Stopped err=<nil>
	I0610 11:48:16.321425   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .DriverName
	W0610 11:48:16.321589   57945 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 11:48:16.323615   57945 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-166693" ...
	I0610 11:48:16.324789   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .Start
	I0610 11:48:16.324981   57945 main.go:141] libmachine: (old-k8s-version-166693) Ensuring networks are active...
	I0610 11:48:16.325682   57945 main.go:141] libmachine: (old-k8s-version-166693) Ensuring network default is active
	I0610 11:48:16.326014   57945 main.go:141] libmachine: (old-k8s-version-166693) Ensuring network mk-old-k8s-version-166693 is active
	I0610 11:48:16.326421   57945 main.go:141] libmachine: (old-k8s-version-166693) Getting domain xml...
	I0610 11:48:16.327154   57945 main.go:141] libmachine: (old-k8s-version-166693) Creating domain...
	I0610 11:48:17.606112   57945 main.go:141] libmachine: (old-k8s-version-166693) Waiting to get IP...
	I0610 11:48:17.607041   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:17.607443   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find current IP address of domain old-k8s-version-166693 in network mk-old-k8s-version-166693
	I0610 11:48:17.607515   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:48:17.607428   58819 retry.go:31] will retry after 206.372237ms: waiting for machine to come up
	I0610 11:48:17.816015   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:17.816418   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find current IP address of domain old-k8s-version-166693 in network mk-old-k8s-version-166693
	I0610 11:48:17.816445   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:48:17.816373   58819 retry.go:31] will retry after 304.763184ms: waiting for machine to come up
	I0610 11:48:18.122927   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:18.123482   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find current IP address of domain old-k8s-version-166693 in network mk-old-k8s-version-166693
	I0610 11:48:18.123511   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:48:18.123423   58819 retry.go:31] will retry after 477.244101ms: waiting for machine to come up
	I0610 11:48:18.601892   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:18.602459   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find current IP address of domain old-k8s-version-166693 in network mk-old-k8s-version-166693
	I0610 11:48:18.602482   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:48:18.602394   58819 retry.go:31] will retry after 433.878943ms: waiting for machine to come up
	I0610 11:48:19.038056   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:19.038478   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find current IP address of domain old-k8s-version-166693 in network mk-old-k8s-version-166693
	I0610 11:48:19.038508   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:48:19.038432   58819 retry.go:31] will retry after 581.568577ms: waiting for machine to come up
	I0610 11:48:19.621361   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:19.621902   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find current IP address of domain old-k8s-version-166693 in network mk-old-k8s-version-166693
	I0610 11:48:19.621940   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:48:19.621841   58819 retry.go:31] will retry after 636.684333ms: waiting for machine to come up
	I0610 11:48:20.259700   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:20.260059   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find current IP address of domain old-k8s-version-166693 in network mk-old-k8s-version-166693
	I0610 11:48:20.260086   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:48:20.260010   58819 retry.go:31] will retry after 736.918356ms: waiting for machine to come up
	I0610 11:48:20.998959   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:20.999465   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find current IP address of domain old-k8s-version-166693 in network mk-old-k8s-version-166693
	I0610 11:48:20.999494   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:48:20.999410   58819 retry.go:31] will retry after 964.036479ms: waiting for machine to come up
	I0610 11:48:21.964645   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:21.965095   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find current IP address of domain old-k8s-version-166693 in network mk-old-k8s-version-166693
	I0610 11:48:21.965119   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:48:21.965046   58819 retry.go:31] will retry after 1.372666662s: waiting for machine to come up
	I0610 11:48:23.339763   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:23.340265   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find current IP address of domain old-k8s-version-166693 in network mk-old-k8s-version-166693
	I0610 11:48:23.340292   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:48:23.340215   58819 retry.go:31] will retry after 1.947869778s: waiting for machine to come up
	I0610 11:48:25.290089   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:25.290597   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find current IP address of domain old-k8s-version-166693 in network mk-old-k8s-version-166693
	I0610 11:48:25.290621   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:48:25.290512   58819 retry.go:31] will retry after 2.456616784s: waiting for machine to come up
	I0610 11:48:27.748868   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:27.749473   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find current IP address of domain old-k8s-version-166693 in network mk-old-k8s-version-166693
	I0610 11:48:27.749501   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:48:27.749414   58819 retry.go:31] will retry after 3.04818038s: waiting for machine to come up
	I0610 11:48:30.801454   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:30.801839   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | unable to find current IP address of domain old-k8s-version-166693 in network mk-old-k8s-version-166693
	I0610 11:48:30.801865   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | I0610 11:48:30.801810   58819 retry.go:31] will retry after 4.171509473s: waiting for machine to come up
	I0610 11:48:34.974603   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:34.975155   57945 main.go:141] libmachine: (old-k8s-version-166693) Found IP for machine: 192.168.72.34
	I0610 11:48:34.975179   57945 main.go:141] libmachine: (old-k8s-version-166693) Reserving static IP address...
	I0610 11:48:34.975198   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has current primary IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:34.975514   57945 main.go:141] libmachine: (old-k8s-version-166693) Reserved static IP address: 192.168.72.34
	I0610 11:48:34.975534   57945 main.go:141] libmachine: (old-k8s-version-166693) Waiting for SSH to be available...
	I0610 11:48:34.975559   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "old-k8s-version-166693", mac: "52:54:00:43:ea:f9", ip: "192.168.72.34"} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:48:26 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:48:34.975586   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | skip adding static IP to network mk-old-k8s-version-166693 - found existing host DHCP lease matching {name: "old-k8s-version-166693", mac: "52:54:00:43:ea:f9", ip: "192.168.72.34"}
	I0610 11:48:34.975603   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | Getting to WaitForSSH function...
	I0610 11:48:34.977798   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:34.978147   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:48:26 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:48:34.978181   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:34.978299   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | Using SSH client type: external
	I0610 11:48:34.978325   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | Using SSH private key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/old-k8s-version-166693/id_rsa (-rw-------)
	I0610 11:48:34.978368   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.34 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19046-3880/.minikube/machines/old-k8s-version-166693/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0610 11:48:34.978389   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | About to run SSH command:
	I0610 11:48:34.978405   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | exit 0
	I0610 11:48:35.105074   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | SSH cmd err, output: <nil>: 
	I0610 11:48:35.105431   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetConfigRaw
	I0610 11:48:35.106044   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetIP
	I0610 11:48:35.108455   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:35.108807   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:48:26 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:48:35.108839   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:35.109072   57945 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/config.json ...
	I0610 11:48:35.109253   57945 machine.go:94] provisionDockerMachine start ...
	I0610 11:48:35.109270   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .DriverName
	I0610 11:48:35.109448   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHHostname
	I0610 11:48:35.111670   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:35.112014   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:48:26 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:48:35.112046   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:35.112266   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHPort
	I0610 11:48:35.112459   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:48:35.112574   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:48:35.112718   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHUsername
	I0610 11:48:35.112867   57945 main.go:141] libmachine: Using SSH client type: native
	I0610 11:48:35.113097   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.34 22 <nil> <nil>}
	I0610 11:48:35.113111   57945 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 11:48:35.220905   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 11:48:35.220939   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetMachineName
	I0610 11:48:35.221179   57945 buildroot.go:166] provisioning hostname "old-k8s-version-166693"
	I0610 11:48:35.221200   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetMachineName
	I0610 11:48:35.221390   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHHostname
	I0610 11:48:35.223835   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:35.224205   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:48:26 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:48:35.224231   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:35.224415   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHPort
	I0610 11:48:35.224602   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:48:35.224773   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:48:35.224982   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHUsername
	I0610 11:48:35.225160   57945 main.go:141] libmachine: Using SSH client type: native
	I0610 11:48:35.225359   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.34 22 <nil> <nil>}
	I0610 11:48:35.225373   57945 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-166693 && echo "old-k8s-version-166693" | sudo tee /etc/hostname
	I0610 11:48:35.347724   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-166693
	
	I0610 11:48:35.347762   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHHostname
	I0610 11:48:35.350787   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:35.351197   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:48:26 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:48:35.351236   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:35.351411   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHPort
	I0610 11:48:35.351642   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:48:35.351821   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:48:35.351961   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHUsername
	I0610 11:48:35.352121   57945 main.go:141] libmachine: Using SSH client type: native
	I0610 11:48:35.352336   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.34 22 <nil> <nil>}
	I0610 11:48:35.352362   57945 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-166693' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-166693/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-166693' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 11:48:35.469212   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 11:48:35.469243   57945 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19046-3880/.minikube CaCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19046-3880/.minikube}
	I0610 11:48:35.469269   57945 buildroot.go:174] setting up certificates
	I0610 11:48:35.469279   57945 provision.go:84] configureAuth start
	I0610 11:48:35.469300   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetMachineName
	I0610 11:48:35.469562   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetIP
	I0610 11:48:35.472115   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:35.472460   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:48:26 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:48:35.472499   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:35.472601   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHHostname
	I0610 11:48:35.474969   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:35.475319   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:48:26 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:48:35.475350   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:35.475514   57945 provision.go:143] copyHostCerts
	I0610 11:48:35.475592   57945 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem, removing ...
	I0610 11:48:35.475610   57945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 11:48:35.475683   57945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem (1082 bytes)
	I0610 11:48:35.475805   57945 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem, removing ...
	I0610 11:48:35.475819   57945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 11:48:35.475853   57945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem (1123 bytes)
	I0610 11:48:35.475941   57945 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem, removing ...
	I0610 11:48:35.475953   57945 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 11:48:35.475981   57945 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem (1675 bytes)
	I0610 11:48:35.476060   57945 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-166693 san=[127.0.0.1 192.168.72.34 localhost minikube old-k8s-version-166693]
	I0610 11:48:35.608047   57945 provision.go:177] copyRemoteCerts
	I0610 11:48:35.608102   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 11:48:35.608128   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHHostname
	I0610 11:48:35.610474   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:35.610830   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:48:26 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:48:35.610857   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:35.611035   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHPort
	I0610 11:48:35.611194   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:48:35.611360   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHUsername
	I0610 11:48:35.611472   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/old-k8s-version-166693/id_rsa Username:docker}
	I0610 11:48:35.694458   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 11:48:35.717308   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0610 11:48:35.739169   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 11:48:35.761802   57945 provision.go:87] duration metric: took 292.510007ms to configureAuth
	I0610 11:48:35.761830   57945 buildroot.go:189] setting minikube options for container-runtime
	I0610 11:48:35.762023   57945 config.go:182] Loaded profile config "old-k8s-version-166693": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0610 11:48:35.762104   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHHostname
	I0610 11:48:35.764705   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:35.765096   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:48:26 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:48:35.765125   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:35.765294   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHPort
	I0610 11:48:35.765481   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:48:35.765634   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:48:35.765768   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHUsername
	I0610 11:48:35.765913   57945 main.go:141] libmachine: Using SSH client type: native
	I0610 11:48:35.766089   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.34 22 <nil> <nil>}
	I0610 11:48:35.766110   57945 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 11:48:36.048425   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 11:48:36.048454   57945 machine.go:97] duration metric: took 939.188429ms to provisionDockerMachine
	I0610 11:48:36.048466   57945 start.go:293] postStartSetup for "old-k8s-version-166693" (driver="kvm2")
	I0610 11:48:36.048486   57945 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 11:48:36.048525   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .DriverName
	I0610 11:48:36.048885   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 11:48:36.048920   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHHostname
	I0610 11:48:36.052323   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:36.052719   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:48:26 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:48:36.052757   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:36.052878   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHPort
	I0610 11:48:36.053127   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:48:36.053317   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHUsername
	I0610 11:48:36.053509   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/old-k8s-version-166693/id_rsa Username:docker}
	I0610 11:48:36.139839   57945 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 11:48:36.144108   57945 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 11:48:36.144141   57945 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/addons for local assets ...
	I0610 11:48:36.144230   57945 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/files for local assets ...
	I0610 11:48:36.144367   57945 filesync.go:149] local asset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> 107582.pem in /etc/ssl/certs
	I0610 11:48:36.144506   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 11:48:36.153430   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /etc/ssl/certs/107582.pem (1708 bytes)
	I0610 11:48:36.177347   57945 start.go:296] duration metric: took 128.864503ms for postStartSetup
	I0610 11:48:36.177391   57945 fix.go:56] duration metric: took 19.875829809s for fixHost
	I0610 11:48:36.177414   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHHostname
	I0610 11:48:36.180443   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:36.180853   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:48:26 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:48:36.180883   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:36.181060   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHPort
	I0610 11:48:36.181299   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:48:36.181501   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:48:36.181680   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHUsername
	I0610 11:48:36.181877   57945 main.go:141] libmachine: Using SSH client type: native
	I0610 11:48:36.182101   57945 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.34 22 <nil> <nil>}
	I0610 11:48:36.182119   57945 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 11:48:36.289792   57945 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718020116.264021590
	
	I0610 11:48:36.289817   57945 fix.go:216] guest clock: 1718020116.264021590
	I0610 11:48:36.289827   57945 fix.go:229] Guest: 2024-06-10 11:48:36.26402159 +0000 UTC Remote: 2024-06-10 11:48:36.177395334 +0000 UTC m=+203.956764413 (delta=86.626256ms)
	I0610 11:48:36.289852   57945 fix.go:200] guest clock delta is within tolerance: 86.626256ms
	I0610 11:48:36.289859   57945 start.go:83] releasing machines lock for "old-k8s-version-166693", held for 19.988321569s
	I0610 11:48:36.289887   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .DriverName
	I0610 11:48:36.290178   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetIP
	I0610 11:48:36.293128   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:36.293537   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:48:26 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:48:36.293565   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:36.293762   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .DriverName
	I0610 11:48:36.294331   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .DriverName
	I0610 11:48:36.294531   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .DriverName
	I0610 11:48:36.294630   57945 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 11:48:36.294688   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHHostname
	I0610 11:48:36.294767   57945 ssh_runner.go:195] Run: cat /version.json
	I0610 11:48:36.294795   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHHostname
	I0610 11:48:36.297828   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:36.298005   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:36.298238   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:48:26 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:48:36.298268   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:36.298444   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHPort
	I0610 11:48:36.298546   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:48:26 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:48:36.298615   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:48:36.298653   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:36.298717   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHPort
	I0610 11:48:36.298802   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHUsername
	I0610 11:48:36.298968   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHKeyPath
	I0610 11:48:36.298964   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/old-k8s-version-166693/id_rsa Username:docker}
	I0610 11:48:36.299165   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetSSHUsername
	I0610 11:48:36.299304   57945 sshutil.go:53] new ssh client: &{IP:192.168.72.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/old-k8s-version-166693/id_rsa Username:docker}
	I0610 11:48:36.417303   57945 ssh_runner.go:195] Run: systemctl --version
	I0610 11:48:36.425441   57945 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 11:48:36.583117   57945 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 11:48:36.590592   57945 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 11:48:36.590675   57945 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 11:48:36.608407   57945 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 11:48:36.608435   57945 start.go:494] detecting cgroup driver to use...
	I0610 11:48:36.608509   57945 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 11:48:36.630839   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 11:48:36.647330   57945 docker.go:217] disabling cri-docker service (if available) ...
	I0610 11:48:36.647387   57945 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 11:48:36.665711   57945 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 11:48:36.681046   57945 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 11:48:36.815317   57945 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 11:48:37.006726   57945 docker.go:233] disabling docker service ...
	I0610 11:48:37.006793   57945 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 11:48:37.023215   57945 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 11:48:37.039440   57945 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 11:48:37.188258   57945 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 11:48:37.324965   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 11:48:37.348356   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 11:48:37.368538   57945 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0610 11:48:37.368607   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:48:37.383395   57945 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 11:48:37.383470   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:48:37.397136   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:48:37.411207   57945 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:48:37.427884   57945 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 11:48:37.442287   57945 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 11:48:37.453312   57945 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0610 11:48:37.453382   57945 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0610 11:48:37.470910   57945 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 11:48:37.482860   57945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:48:37.621539   57945 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0610 11:48:37.786024   57945 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 11:48:37.786101   57945 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 11:48:37.791784   57945 start.go:562] Will wait 60s for crictl version
	I0610 11:48:37.791837   57945 ssh_runner.go:195] Run: which crictl
	I0610 11:48:37.795572   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 11:48:37.837239   57945 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0610 11:48:37.837323   57945 ssh_runner.go:195] Run: crio --version
	I0610 11:48:37.870034   57945 ssh_runner.go:195] Run: crio --version
	I0610 11:48:37.901862   57945 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0610 11:48:37.903178   57945 main.go:141] libmachine: (old-k8s-version-166693) Calling .GetIP
	I0610 11:48:37.906174   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:37.906500   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:43:ea:f9", ip: ""} in network mk-old-k8s-version-166693: {Iface:virbr3 ExpiryTime:2024-06-10 12:48:26 +0000 UTC Type:0 Mac:52:54:00:43:ea:f9 Iaid: IPaddr:192.168.72.34 Prefix:24 Hostname:old-k8s-version-166693 Clientid:01:52:54:00:43:ea:f9}
	I0610 11:48:37.906530   57945 main.go:141] libmachine: (old-k8s-version-166693) DBG | domain old-k8s-version-166693 has defined IP address 192.168.72.34 and MAC address 52:54:00:43:ea:f9 in network mk-old-k8s-version-166693
	I0610 11:48:37.906779   57945 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0610 11:48:37.910993   57945 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 11:48:37.924187   57945 kubeadm.go:877] updating cluster {Name:old-k8s-version-166693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-166693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.34 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 11:48:37.924330   57945 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0610 11:48:37.924390   57945 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 11:48:37.972408   57945 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0610 11:48:37.972472   57945 ssh_runner.go:195] Run: which lz4
	I0610 11:48:37.977522   57945 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0610 11:48:37.981938   57945 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 11:48:37.981976   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0610 11:48:39.560358   57945 crio.go:462] duration metric: took 1.582864164s to copy over tarball
	I0610 11:48:39.560443   57945 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 11:48:42.724491   57945 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.16401376s)
	I0610 11:48:42.724533   57945 crio.go:469] duration metric: took 3.164136597s to extract the tarball
	I0610 11:48:42.724544   57945 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 11:48:42.767215   57945 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 11:48:42.801325   57945 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0610 11:48:42.801358   57945 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0610 11:48:42.801449   57945 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 11:48:42.801460   57945 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0610 11:48:42.801492   57945 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0610 11:48:42.801478   57945 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0610 11:48:42.801549   57945 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0610 11:48:42.801568   57945 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0610 11:48:42.801661   57945 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0610 11:48:42.801455   57945 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0610 11:48:42.802956   57945 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0610 11:48:42.802967   57945 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 11:48:42.802967   57945 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0610 11:48:42.802956   57945 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0610 11:48:42.802963   57945 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0610 11:48:42.803020   57945 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0610 11:48:42.803023   57945 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0610 11:48:42.803038   57945 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0610 11:48:43.008415   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0610 11:48:43.029523   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0610 11:48:43.044171   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0610 11:48:43.045546   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0610 11:48:43.046666   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0610 11:48:43.047814   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0610 11:48:43.057202   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0610 11:48:43.082541   57945 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0610 11:48:43.082580   57945 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0610 11:48:43.082627   57945 ssh_runner.go:195] Run: which crictl
	I0610 11:48:43.161199   57945 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0610 11:48:43.161246   57945 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0610 11:48:43.161311   57945 ssh_runner.go:195] Run: which crictl
	I0610 11:48:43.172397   57945 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0610 11:48:43.172442   57945 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0610 11:48:43.172488   57945 ssh_runner.go:195] Run: which crictl
	I0610 11:48:43.172517   57945 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0610 11:48:43.172546   57945 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0610 11:48:43.172589   57945 ssh_runner.go:195] Run: which crictl
	I0610 11:48:43.199805   57945 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0610 11:48:43.199851   57945 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0610 11:48:43.199900   57945 ssh_runner.go:195] Run: which crictl
	I0610 11:48:43.206565   57945 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0610 11:48:43.206612   57945 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0610 11:48:43.206656   57945 ssh_runner.go:195] Run: which crictl
	I0610 11:48:43.206663   57945 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0610 11:48:43.206699   57945 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0610 11:48:43.206737   57945 ssh_runner.go:195] Run: which crictl
	I0610 11:48:43.206743   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0610 11:48:43.206760   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0610 11:48:43.206824   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0610 11:48:43.206862   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0610 11:48:43.206892   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0610 11:48:43.224823   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0610 11:48:43.314469   57945 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0610 11:48:43.314532   57945 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0610 11:48:43.314708   57945 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0610 11:48:43.349602   57945 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0610 11:48:43.349682   57945 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0610 11:48:43.349683   57945 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0610 11:48:43.350593   57945 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0610 11:48:43.373405   57945 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0610 11:48:43.648865   57945 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 11:48:43.819239   57945 cache_images.go:92] duration metric: took 1.017860821s to LoadCachedImages
	W0610 11:48:43.819361   57945 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/19046-3880/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
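
Note on the "needs transfer" / "Unable to load cached images" lines above: minikube compares the image ID reported by the container runtime (via the `sudo podman image inspect --format {{.Id}}` call visible a few lines below) against the expected digest, and only falls back to loading the cached tarball when they differ; here the tarball itself is missing on disk, hence the stat error. A minimal sketch of that comparison, assuming podman's {{.Id}} output is the bare hex digest used in the log (illustrative only, not minikube's cache_images implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageMissingAtHash mirrors the "needs transfer" checks above: ask the runtime
// (via podman, as in the log) for the image ID and compare it with the expected hash.
// A non-zero exit simply means the image is not present at all.
func imageMissingAtHash(image, expectedID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true
	}
	return strings.TrimSpace(string(out)) != expectedID
}

func main() {
	// Image name and hash taken from the pause:3.2 line in the log above.
	missing := imageMissingAtHash("registry.k8s.io/pause:3.2",
		"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c")
	fmt.Println("needs transfer:", missing)
}
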
	I0610 11:48:43.819380   57945 kubeadm.go:928] updating node { 192.168.72.34 8443 v1.20.0 crio true true} ...
	I0610 11:48:43.819516   57945 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-166693 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.34
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-166693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 11:48:43.819589   57945 ssh_runner.go:195] Run: crio config
	I0610 11:48:43.874139   57945 cni.go:84] Creating CNI manager for ""
	I0610 11:48:43.874168   57945 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:48:43.874204   57945 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 11:48:43.874228   57945 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.34 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-166693 NodeName:old-k8s-version-166693 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.34"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.34 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0610 11:48:43.874402   57945 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.34
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-166693"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.34
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.34"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
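
For context, the kubeadm config printed above is rendered from the kubeadm options struct shown at kubeadm.go:181 (node IP, Kubernetes version, pod and service CIDRs substituted into a YAML template) and is then written to /var/tmp/minikube/kubeadm.yaml.new by the scp line below. A trimmed-down sketch of that rendering step, assuming a hypothetical minimal template and params struct rather than minikube's actual ones:

package main

import (
	"os"
	"text/template"
)

// params holds a handful of the values substituted into the config above; the real
// generator uses the much larger options struct from the kubeadm.go:181 line earlier.
type params struct {
	AdvertiseAddress  string
	APIServerPort     int
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := params{
		AdvertiseAddress:  "192.168.72.34",
		APIServerPort:     8443,
		KubernetesVersion: "v1.20.0",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	// Render to stdout; minikube instead copies the result to /var/tmp/minikube/kubeadm.yaml.new.
	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p)
}
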
	I0610 11:48:43.874471   57945 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0610 11:48:43.888287   57945 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 11:48:43.888366   57945 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 11:48:43.900850   57945 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0610 11:48:43.919051   57945 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 11:48:43.936695   57945 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0610 11:48:43.956999   57945 ssh_runner.go:195] Run: grep 192.168.72.34	control-plane.minikube.internal$ /etc/hosts
	I0610 11:48:43.961634   57945 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.34	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
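
The bash pipeline above ensures control-plane.minikube.internal resolves to the current node IP: it filters out any previous entry, appends the fresh one, and writes through a temp file before copying over /etc/hosts. A rough Go equivalent, assuming root access on the guest (a hypothetical helper, not minikube code):

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry drops any existing control-plane.minikube.internal line from
// /etc/hosts and appends one that points at ip, like the bash pipeline in the log.
func ensureHostsEntry(ip string) error {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\tcontrol-plane.minikube.internal")
	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	_ = ensureHostsEntry("192.168.72.34")
}
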
	I0610 11:48:43.977163   57945 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:48:44.113820   57945 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 11:48:44.132229   57945 certs.go:68] Setting up /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693 for IP: 192.168.72.34
	I0610 11:48:44.132265   57945 certs.go:194] generating shared ca certs ...
	I0610 11:48:44.132287   57945 certs.go:226] acquiring lock for ca certs: {Name:mke8b68fecbd8b649419d142cdde25446085a9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:48:44.132535   57945 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key
	I0610 11:48:44.132603   57945 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key
	I0610 11:48:44.132622   57945 certs.go:256] generating profile certs ...
	I0610 11:48:44.132762   57945 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/client.key
	I0610 11:48:44.132836   57945 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/apiserver.key.1a4331fb
	I0610 11:48:44.132899   57945 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/proxy-client.key
	I0610 11:48:44.133095   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem (1338 bytes)
	W0610 11:48:44.133141   57945 certs.go:480] ignoring /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758_empty.pem, impossibly tiny 0 bytes
	I0610 11:48:44.133156   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 11:48:44.133202   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem (1082 bytes)
	I0610 11:48:44.133239   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem (1123 bytes)
	I0610 11:48:44.133277   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem (1675 bytes)
	I0610 11:48:44.133335   57945 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem (1708 bytes)
	I0610 11:48:44.134277   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 11:48:44.180613   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 11:48:44.208862   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 11:48:44.233594   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 11:48:44.266533   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0610 11:48:44.304110   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 11:48:44.349803   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 11:48:44.387942   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 11:48:44.412231   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /usr/share/ca-certificates/107582.pem (1708 bytes)
	I0610 11:48:44.436269   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 11:48:44.462329   57945 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem --> /usr/share/ca-certificates/10758.pem (1338 bytes)
	I0610 11:48:44.490527   57945 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 11:48:44.507667   57945 ssh_runner.go:195] Run: openssl version
	I0610 11:48:44.513199   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10758.pem && ln -fs /usr/share/ca-certificates/10758.pem /etc/ssl/certs/10758.pem"
	I0610 11:48:44.523482   57945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10758.pem
	I0610 11:48:44.527875   57945 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:34 /usr/share/ca-certificates/10758.pem
	I0610 11:48:44.527954   57945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10758.pem
	I0610 11:48:44.534304   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10758.pem /etc/ssl/certs/51391683.0"
	I0610 11:48:44.545944   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107582.pem && ln -fs /usr/share/ca-certificates/107582.pem /etc/ssl/certs/107582.pem"
	I0610 11:48:44.557278   57945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107582.pem
	I0610 11:48:44.561921   57945 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:34 /usr/share/ca-certificates/107582.pem
	I0610 11:48:44.561999   57945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107582.pem
	I0610 11:48:44.569512   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107582.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 11:48:44.580284   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 11:48:44.592473   57945 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:48:44.597346   57945 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:48:44.597465   57945 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:48:44.602946   57945 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
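
The openssl/ln sequence above installs each CA certificate under its OpenSSL subject-name hash: `openssl x509 -hash -noout` prints the hash (51391683, 3ec20f2e, b5213941 in the symlink names here), and /etc/ssl/certs/<hash>.0 is symlinked to the PEM so TLS clients on the guest can find it. A small sketch of that step, shelling out to the same openssl and ln commands shown in the log (illustrative, run as root on the guest):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureHashLink computes the subject-name hash of a CA certificate and symlinks
// /etc/ssl/certs/<hash>.0 to it, matching the "test -L ... || ln -fs ..." guard above.
func ensureHashLink(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	return exec.Command("ln", "-fs", pem, link).Run()
}

func main() {
	if err := ensureHashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("link failed:", err)
	}
}
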
	I0610 11:48:44.614171   57945 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 11:48:44.619029   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0610 11:48:44.624856   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0610 11:48:44.630849   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0610 11:48:44.637068   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0610 11:48:44.644413   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0610 11:48:44.652764   57945 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
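
The `-checkend 86400` calls above verify that each control-plane certificate is still valid for at least 24 hours before reuse. The same check can be expressed with the standard library by comparing NotAfter against now plus the window; a self-contained sketch (not minikube's code, paths taken from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the certificate at path is still valid for at least d,
// the same question `openssl x509 -checkend 86400` answers in the log above.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
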
	I0610 11:48:44.658870   57945 kubeadm.go:391] StartCluster: {Name:old-k8s-version-166693 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-166693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.34 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:48:44.658976   57945 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0610 11:48:44.659028   57945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 11:48:44.700144   57945 cri.go:89] found id: ""
	I0610 11:48:44.700224   57945 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0610 11:48:44.710166   57945 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0610 11:48:44.710195   57945 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0610 11:48:44.710203   57945 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0610 11:48:44.710253   57945 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0610 11:48:44.720138   57945 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0610 11:48:44.722308   57945 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-166693" does not appear in /home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 11:48:44.723492   57945 kubeconfig.go:62] /home/jenkins/minikube-integration/19046-3880/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-166693" cluster setting kubeconfig missing "old-k8s-version-166693" context setting]
	I0610 11:48:44.725219   57945 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/kubeconfig: {Name:mk6bc087e599296d9e4a696a021944fac20ee98b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:48:44.793224   57945 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0610 11:48:44.804493   57945 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.34
	I0610 11:48:44.804536   57945 kubeadm.go:1154] stopping kube-system containers ...
	I0610 11:48:44.804549   57945 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0610 11:48:44.804598   57945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 11:48:44.848899   57945 cri.go:89] found id: ""
	I0610 11:48:44.848992   57945 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0610 11:48:44.866023   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:48:44.876704   57945 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:48:44.876733   57945 kubeadm.go:156] found existing configuration files:
	
	I0610 11:48:44.876792   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 11:48:44.886879   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:48:44.886951   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:48:44.896962   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 11:48:44.907183   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:48:44.907242   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:48:44.916689   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 11:48:44.926096   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:48:44.926166   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:48:44.936097   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 11:48:44.945725   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:48:44.945778   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
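
The grep/rm sequence above treats each /etc/kubernetes/*.conf as stale unless it already references https://control-plane.minikube.internal:8443; stale (or, as here, missing) files are removed so the `kubeadm init phase kubeconfig all` run below can regenerate them. A compact sketch of that cleanup logic, using stdlib file reads instead of ssh_runner (illustrative only):

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeStaleKubeconfigs deletes any kubeconfig that does not reference the expected
// control-plane endpoint, mirroring the grep -> rm -f sequence in the log above.
// Files that do not exist are simply skipped, matching the "No such file" stderr lines.
func removeStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Println("removing stale", p)
			os.Remove(p)
		}
	}
}

func main() {
	removeStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
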
	I0610 11:48:44.955396   57945 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 11:48:44.965229   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:48:45.194592   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:48:45.969653   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:48:46.199771   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:48:46.311375   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:48:46.407592   57945 api_server.go:52] waiting for apiserver process to appear ...
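
The long run of repeated pgrep lines that follows is a polling loop: roughly every 500 ms minikube runs `sudo pgrep -xnf kube-apiserver.*minikube.*` and keeps going until the apiserver process appears or the wait times out (it never appears in this failed run). A simplified sketch of such a loop, not minikube's actual api_server.go implementation:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls for a kube-apiserver process started by minikube,
// using the same pgrep predicate as the log lines below, until it appears or times out.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process never appeared within %s", timeout)
}

func main() {
	fmt.Println(waitForAPIServerProcess(time.Minute))
}
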
	I0610 11:48:46.407744   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:46.908158   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:47.408614   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:47.907882   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:48.408087   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:48.907866   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:49.408513   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:49.908348   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:50.408175   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:50.908330   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:51.408520   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:51.908092   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:52.408394   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:52.908385   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:53.408228   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:53.907889   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:54.408089   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:54.908363   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:55.408397   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:55.908051   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:56.408058   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:56.908080   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:57.408229   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:57.908776   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:58.408539   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:58.908740   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:59.407842   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:48:59.908752   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:00.408814   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:00.908442   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:01.408390   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:01.908297   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:02.408297   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:02.908638   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:03.408604   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:03.908372   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:04.408746   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:04.908627   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:05.408595   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:05.907808   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:06.408550   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:06.908484   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:07.408239   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:07.907948   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:08.408189   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:08.908628   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:09.407985   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:09.908172   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:10.408582   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:10.908328   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:11.408621   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:11.908740   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:12.408688   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:12.908752   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:13.408781   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:13.907961   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:14.408536   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:14.908184   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:15.408120   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:15.908328   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:16.408276   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:16.908001   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:17.408034   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:17.908525   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:18.408229   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:18.907839   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:19.408449   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:19.908793   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:20.408760   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:20.908605   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:21.408596   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:21.908615   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:22.408170   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:22.908428   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:23.407848   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:23.908428   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:24.408238   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:24.908278   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:25.408131   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:25.908823   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:26.408563   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:26.908525   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:27.407957   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:27.907861   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:28.408099   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:28.908159   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:29.408089   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:29.907856   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:30.407899   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:30.907808   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:31.408006   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:31.908277   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:32.408211   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:32.908477   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:33.408358   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:33.908192   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:34.407933   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:34.908452   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:35.408786   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:35.907793   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:36.408538   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:36.907927   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:37.408329   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:37.907857   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:38.408664   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:38.908002   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:39.408029   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:39.908618   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:40.408099   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:40.907981   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:41.408006   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:41.908687   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:42.408631   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:42.908466   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:43.408087   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:43.908732   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:44.407947   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:44.908673   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:45.408598   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:45.908210   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:46.408452   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:49:46.408537   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:49:46.446921   57945 cri.go:89] found id: ""
	I0610 11:49:46.446945   57945 logs.go:276] 0 containers: []
	W0610 11:49:46.446954   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:49:46.446961   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:49:46.447024   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:49:46.484087   57945 cri.go:89] found id: ""
	I0610 11:49:46.484114   57945 logs.go:276] 0 containers: []
	W0610 11:49:46.484122   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:49:46.484127   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:49:46.484176   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:49:46.520980   57945 cri.go:89] found id: ""
	I0610 11:49:46.521007   57945 logs.go:276] 0 containers: []
	W0610 11:49:46.521018   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:49:46.521026   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:49:46.521084   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:49:46.555241   57945 cri.go:89] found id: ""
	I0610 11:49:46.555274   57945 logs.go:276] 0 containers: []
	W0610 11:49:46.555285   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:49:46.555293   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:49:46.555356   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:49:46.586926   57945 cri.go:89] found id: ""
	I0610 11:49:46.586957   57945 logs.go:276] 0 containers: []
	W0610 11:49:46.586967   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:49:46.586973   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:49:46.587027   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:49:46.626236   57945 cri.go:89] found id: ""
	I0610 11:49:46.626266   57945 logs.go:276] 0 containers: []
	W0610 11:49:46.626276   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:49:46.626283   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:49:46.626347   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:49:46.658395   57945 cri.go:89] found id: ""
	I0610 11:49:46.658418   57945 logs.go:276] 0 containers: []
	W0610 11:49:46.658426   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:49:46.658431   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:49:46.658481   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:49:46.689088   57945 cri.go:89] found id: ""
	I0610 11:49:46.689115   57945 logs.go:276] 0 containers: []
	W0610 11:49:46.689126   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
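
Each "listing CRI containers" / "found id" pair above comes from `sudo crictl ps -a --quiet --name=<component>`, which prints one container ID per line (or nothing, as in every case here, hence "0 containers"). A minimal sketch of that listing step, shelling out to the same crictl invocation (an illustration, not minikube's cri package):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of containers whose name matches, in any state,
// mirroring the `sudo crictl ps -a --quiet --name=<name>` calls in the log above.
// An empty slice corresponds to the `found id: ""` / "0 containers" lines.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listContainerIDs("kube-apiserver")
	fmt.Printf("%d containers: %v (err=%v)\n", len(ids), ids, err)
}
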
	I0610 11:49:46.689140   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:49:46.689154   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:49:46.742435   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:49:46.742471   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:49:46.755637   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:49:46.755664   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:49:46.873361   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:49:46.873388   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:49:46.873404   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:49:46.932780   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:49:46.932817   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:49:49.471063   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:49.483138   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:49:49.483208   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:49:49.520186   57945 cri.go:89] found id: ""
	I0610 11:49:49.520213   57945 logs.go:276] 0 containers: []
	W0610 11:49:49.520221   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:49:49.520226   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:49:49.520308   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:49:49.552670   57945 cri.go:89] found id: ""
	I0610 11:49:49.552701   57945 logs.go:276] 0 containers: []
	W0610 11:49:49.552709   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:49:49.552714   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:49:49.552762   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:49:49.585359   57945 cri.go:89] found id: ""
	I0610 11:49:49.585380   57945 logs.go:276] 0 containers: []
	W0610 11:49:49.585388   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:49:49.585393   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:49:49.585437   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:49:49.621213   57945 cri.go:89] found id: ""
	I0610 11:49:49.621242   57945 logs.go:276] 0 containers: []
	W0610 11:49:49.621262   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:49:49.621268   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:49:49.621322   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:49:49.655739   57945 cri.go:89] found id: ""
	I0610 11:49:49.655763   57945 logs.go:276] 0 containers: []
	W0610 11:49:49.655770   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:49:49.655775   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:49:49.655821   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:49:49.687616   57945 cri.go:89] found id: ""
	I0610 11:49:49.687643   57945 logs.go:276] 0 containers: []
	W0610 11:49:49.687651   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:49:49.687657   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:49:49.687719   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:49:49.721432   57945 cri.go:89] found id: ""
	I0610 11:49:49.721457   57945 logs.go:276] 0 containers: []
	W0610 11:49:49.721464   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:49:49.721470   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:49:49.721516   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:49:49.753312   57945 cri.go:89] found id: ""
	I0610 11:49:49.753337   57945 logs.go:276] 0 containers: []
	W0610 11:49:49.753345   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:49:49.753352   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:49:49.753366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:49:49.805116   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:49:49.805158   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:49:49.817908   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:49:49.817938   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:49:49.900634   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:49:49.900660   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:49:49.900679   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:49:49.974048   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:49:49.974087   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:49:52.514953   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:52.527309   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:49:52.527370   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:49:52.560535   57945 cri.go:89] found id: ""
	I0610 11:49:52.560560   57945 logs.go:276] 0 containers: []
	W0610 11:49:52.560568   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:49:52.560576   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:49:52.560635   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:49:52.595080   57945 cri.go:89] found id: ""
	I0610 11:49:52.595111   57945 logs.go:276] 0 containers: []
	W0610 11:49:52.595123   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:49:52.595129   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:49:52.595192   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:49:52.626765   57945 cri.go:89] found id: ""
	I0610 11:49:52.626794   57945 logs.go:276] 0 containers: []
	W0610 11:49:52.626824   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:49:52.626849   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:49:52.626916   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:49:52.659065   57945 cri.go:89] found id: ""
	I0610 11:49:52.659093   57945 logs.go:276] 0 containers: []
	W0610 11:49:52.659104   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:49:52.659111   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:49:52.659188   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:49:52.695151   57945 cri.go:89] found id: ""
	I0610 11:49:52.695174   57945 logs.go:276] 0 containers: []
	W0610 11:49:52.695184   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:49:52.695192   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:49:52.695250   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:49:52.730510   57945 cri.go:89] found id: ""
	I0610 11:49:52.730540   57945 logs.go:276] 0 containers: []
	W0610 11:49:52.730551   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:49:52.730559   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:49:52.730608   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:49:52.761871   57945 cri.go:89] found id: ""
	I0610 11:49:52.761897   57945 logs.go:276] 0 containers: []
	W0610 11:49:52.761904   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:49:52.761910   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:49:52.761987   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:49:52.795964   57945 cri.go:89] found id: ""
	I0610 11:49:52.795998   57945 logs.go:276] 0 containers: []
	W0610 11:49:52.796016   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:49:52.796028   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:49:52.796044   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:49:52.863174   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:49:52.863222   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:49:52.863241   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:49:52.933619   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:49:52.933669   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:49:52.970975   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:49:52.971002   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:49:53.019445   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:49:53.019478   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:49:55.533988   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:55.547504   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:49:55.547563   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:49:55.581676   57945 cri.go:89] found id: ""
	I0610 11:49:55.581704   57945 logs.go:276] 0 containers: []
	W0610 11:49:55.581712   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:49:55.581720   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:49:55.581780   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:49:55.617229   57945 cri.go:89] found id: ""
	I0610 11:49:55.617260   57945 logs.go:276] 0 containers: []
	W0610 11:49:55.617269   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:49:55.617275   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:49:55.617333   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:49:55.682254   57945 cri.go:89] found id: ""
	I0610 11:49:55.682346   57945 logs.go:276] 0 containers: []
	W0610 11:49:55.682362   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:49:55.682370   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:49:55.682434   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:49:55.731545   57945 cri.go:89] found id: ""
	I0610 11:49:55.731573   57945 logs.go:276] 0 containers: []
	W0610 11:49:55.731585   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:49:55.731591   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:49:55.731650   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:49:55.767187   57945 cri.go:89] found id: ""
	I0610 11:49:55.767223   57945 logs.go:276] 0 containers: []
	W0610 11:49:55.767235   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:49:55.767242   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:49:55.767309   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:49:55.803793   57945 cri.go:89] found id: ""
	I0610 11:49:55.803821   57945 logs.go:276] 0 containers: []
	W0610 11:49:55.803829   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:49:55.803834   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:49:55.803883   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:49:55.836517   57945 cri.go:89] found id: ""
	I0610 11:49:55.836544   57945 logs.go:276] 0 containers: []
	W0610 11:49:55.836551   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:49:55.836557   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:49:55.836616   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:49:55.871228   57945 cri.go:89] found id: ""
	I0610 11:49:55.871259   57945 logs.go:276] 0 containers: []
	W0610 11:49:55.871271   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:49:55.871282   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:49:55.871296   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:49:55.924258   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:49:55.924296   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:49:55.937734   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:49:55.937762   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:49:56.005556   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:49:56.005575   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:49:56.005587   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:49:56.081605   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:49:56.081640   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
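The block above is one iteration of minikube's apiserver wait loop: it probes for a kube-apiserver process, then asks CRI-O whether any control-plane or addon container has ever been created, and only then falls back to gathering logs. A minimal shell sketch of that probe, built only from the commands that appear in this log (the component list is the same one the loop cycles through), would be:

  # Does a kube-apiserver process for this profile exist yet?
  sudo pgrep -xnf 'kube-apiserver.*minikube.*'

  # Has CRI-O ever created a container for any of these components?
  # Empty output means "not yet", which is what every probe below reports.
  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
              kube-controller-manager kindnet kubernetes-dashboard; do
    echo "== ${name} =="
    sudo crictl ps -a --quiet --name="${name}"
  done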
	I0610 11:49:58.624676   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:49:58.639924   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:49:58.639999   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:49:58.674071   57945 cri.go:89] found id: ""
	I0610 11:49:58.674100   57945 logs.go:276] 0 containers: []
	W0610 11:49:58.674108   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:49:58.674114   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:49:58.674175   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:49:58.712499   57945 cri.go:89] found id: ""
	I0610 11:49:58.712526   57945 logs.go:276] 0 containers: []
	W0610 11:49:58.712536   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:49:58.712543   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:49:58.712603   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:49:58.756783   57945 cri.go:89] found id: ""
	I0610 11:49:58.756865   57945 logs.go:276] 0 containers: []
	W0610 11:49:58.756882   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:49:58.756889   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:49:58.756978   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:49:58.791264   57945 cri.go:89] found id: ""
	I0610 11:49:58.791288   57945 logs.go:276] 0 containers: []
	W0610 11:49:58.791295   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:49:58.791301   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:49:58.791361   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:49:58.824703   57945 cri.go:89] found id: ""
	I0610 11:49:58.824733   57945 logs.go:276] 0 containers: []
	W0610 11:49:58.824761   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:49:58.824768   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:49:58.824827   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:49:58.856816   57945 cri.go:89] found id: ""
	I0610 11:49:58.856846   57945 logs.go:276] 0 containers: []
	W0610 11:49:58.856856   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:49:58.856864   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:49:58.856944   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:49:58.890262   57945 cri.go:89] found id: ""
	I0610 11:49:58.890285   57945 logs.go:276] 0 containers: []
	W0610 11:49:58.890296   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:49:58.890303   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:49:58.890367   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:49:58.924794   57945 cri.go:89] found id: ""
	I0610 11:49:58.924822   57945 logs.go:276] 0 containers: []
	W0610 11:49:58.924831   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:49:58.924845   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:49:58.924859   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:49:58.979943   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:49:58.979990   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:49:58.993388   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:49:58.993420   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:49:59.067250   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:49:59.067273   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:49:59.067294   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:49:59.144495   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:49:59.144530   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:50:01.681071   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:50:01.695508   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:50:01.695578   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:50:01.729137   57945 cri.go:89] found id: ""
	I0610 11:50:01.729165   57945 logs.go:276] 0 containers: []
	W0610 11:50:01.729173   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:50:01.729179   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:50:01.729248   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:50:01.760841   57945 cri.go:89] found id: ""
	I0610 11:50:01.760871   57945 logs.go:276] 0 containers: []
	W0610 11:50:01.760883   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:50:01.760890   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:50:01.760974   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:50:01.795799   57945 cri.go:89] found id: ""
	I0610 11:50:01.795832   57945 logs.go:276] 0 containers: []
	W0610 11:50:01.795843   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:50:01.795851   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:50:01.795901   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:50:01.828398   57945 cri.go:89] found id: ""
	I0610 11:50:01.828427   57945 logs.go:276] 0 containers: []
	W0610 11:50:01.828438   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:50:01.828446   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:50:01.828506   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:50:01.868938   57945 cri.go:89] found id: ""
	I0610 11:50:01.868989   57945 logs.go:276] 0 containers: []
	W0610 11:50:01.869000   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:50:01.869007   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:50:01.869078   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:50:01.903778   57945 cri.go:89] found id: ""
	I0610 11:50:01.903808   57945 logs.go:276] 0 containers: []
	W0610 11:50:01.903818   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:50:01.903825   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:50:01.903883   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:50:01.938775   57945 cri.go:89] found id: ""
	I0610 11:50:01.938807   57945 logs.go:276] 0 containers: []
	W0610 11:50:01.938818   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:50:01.938826   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:50:01.938889   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:50:01.972010   57945 cri.go:89] found id: ""
	I0610 11:50:01.972036   57945 logs.go:276] 0 containers: []
	W0610 11:50:01.972044   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:50:01.972053   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:50:01.972064   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:50:02.022164   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:50:02.022200   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:50:02.035759   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:50:02.035797   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:50:02.104045   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:50:02.104070   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:50:02.104086   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:50:02.178197   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:50:02.178233   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:50:04.716631   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:50:04.729903   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:50:04.730000   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:50:04.764043   57945 cri.go:89] found id: ""
	I0610 11:50:04.764071   57945 logs.go:276] 0 containers: []
	W0610 11:50:04.764081   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:50:04.764089   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:50:04.764152   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:50:04.798588   57945 cri.go:89] found id: ""
	I0610 11:50:04.798614   57945 logs.go:276] 0 containers: []
	W0610 11:50:04.798621   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:50:04.798627   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:50:04.798672   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:50:04.834952   57945 cri.go:89] found id: ""
	I0610 11:50:04.834997   57945 logs.go:276] 0 containers: []
	W0610 11:50:04.835009   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:50:04.835017   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:50:04.835095   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:50:04.868350   57945 cri.go:89] found id: ""
	I0610 11:50:04.868379   57945 logs.go:276] 0 containers: []
	W0610 11:50:04.868390   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:50:04.868397   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:50:04.868459   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:50:04.905063   57945 cri.go:89] found id: ""
	I0610 11:50:04.905093   57945 logs.go:276] 0 containers: []
	W0610 11:50:04.905104   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:50:04.905111   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:50:04.905172   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:50:04.939805   57945 cri.go:89] found id: ""
	I0610 11:50:04.939832   57945 logs.go:276] 0 containers: []
	W0610 11:50:04.939840   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:50:04.939848   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:50:04.939911   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:50:04.973136   57945 cri.go:89] found id: ""
	I0610 11:50:04.973160   57945 logs.go:276] 0 containers: []
	W0610 11:50:04.973167   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:50:04.973173   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:50:04.973219   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:50:05.007517   57945 cri.go:89] found id: ""
	I0610 11:50:05.007541   57945 logs.go:276] 0 containers: []
	W0610 11:50:05.007548   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:50:05.007556   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:50:05.007567   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:50:05.085004   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:50:05.085054   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:50:05.126823   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:50:05.126850   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:50:05.179965   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:50:05.180003   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:50:05.193626   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:50:05.193672   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:50:05.268941   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
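Each failed probe is followed by the same diagnostics sweep: kubelet and CRI-O journals, kernel warnings, a node description, and a container listing. Reproducing it by hand on the node comes down to the commands already shown above; a condensed sketch (the kubectl binary path and kubeconfig location are the ones this log uses and may differ on other clusters):

  sudo journalctl -u kubelet -n 400
  sudo journalctl -u crio -n 400
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

  # Fails with "connection refused" on localhost:8443 until the apiserver container is running
  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig

  # Container status, falling back to docker if crictl is not on PATH
  sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a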
	I0610 11:50:07.769696   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:50:07.782139   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:50:07.782218   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:50:07.813822   57945 cri.go:89] found id: ""
	I0610 11:50:07.813852   57945 logs.go:276] 0 containers: []
	W0610 11:50:07.813862   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:50:07.813872   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:50:07.813932   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:50:07.847553   57945 cri.go:89] found id: ""
	I0610 11:50:07.847584   57945 logs.go:276] 0 containers: []
	W0610 11:50:07.847592   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:50:07.847598   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:50:07.847646   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:50:07.882845   57945 cri.go:89] found id: ""
	I0610 11:50:07.882879   57945 logs.go:276] 0 containers: []
	W0610 11:50:07.882890   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:50:07.882897   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:50:07.882968   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:50:07.914971   57945 cri.go:89] found id: ""
	I0610 11:50:07.915007   57945 logs.go:276] 0 containers: []
	W0610 11:50:07.915020   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:50:07.915029   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:50:07.915092   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:50:07.947949   57945 cri.go:89] found id: ""
	I0610 11:50:07.947978   57945 logs.go:276] 0 containers: []
	W0610 11:50:07.947986   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:50:07.947992   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:50:07.948059   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:50:07.980414   57945 cri.go:89] found id: ""
	I0610 11:50:07.980441   57945 logs.go:276] 0 containers: []
	W0610 11:50:07.980452   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:50:07.980460   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:50:07.980525   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:50:08.012345   57945 cri.go:89] found id: ""
	I0610 11:50:08.012373   57945 logs.go:276] 0 containers: []
	W0610 11:50:08.012381   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:50:08.012395   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:50:08.012455   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:50:08.046202   57945 cri.go:89] found id: ""
	I0610 11:50:08.046227   57945 logs.go:276] 0 containers: []
	W0610 11:50:08.046236   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:50:08.046245   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:50:08.046260   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:50:08.099354   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:50:08.099390   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:50:08.112905   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:50:08.112934   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:50:08.182782   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:50:08.182806   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:50:08.182820   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:50:08.262780   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:50:08.262824   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:50:10.799496   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:50:10.813140   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:50:10.813228   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:50:10.846447   57945 cri.go:89] found id: ""
	I0610 11:50:10.846472   57945 logs.go:276] 0 containers: []
	W0610 11:50:10.846479   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:50:10.846485   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:50:10.846536   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:50:10.879121   57945 cri.go:89] found id: ""
	I0610 11:50:10.879149   57945 logs.go:276] 0 containers: []
	W0610 11:50:10.879157   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:50:10.879164   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:50:10.879218   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:50:10.912862   57945 cri.go:89] found id: ""
	I0610 11:50:10.912889   57945 logs.go:276] 0 containers: []
	W0610 11:50:10.912897   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:50:10.912902   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:50:10.912968   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:50:10.949770   57945 cri.go:89] found id: ""
	I0610 11:50:10.949794   57945 logs.go:276] 0 containers: []
	W0610 11:50:10.949802   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:50:10.949807   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:50:10.949857   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:50:10.984630   57945 cri.go:89] found id: ""
	I0610 11:50:10.984658   57945 logs.go:276] 0 containers: []
	W0610 11:50:10.984665   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:50:10.984671   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:50:10.984720   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:50:11.016145   57945 cri.go:89] found id: ""
	I0610 11:50:11.016175   57945 logs.go:276] 0 containers: []
	W0610 11:50:11.016184   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:50:11.016189   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:50:11.016244   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:50:11.054111   57945 cri.go:89] found id: ""
	I0610 11:50:11.054143   57945 logs.go:276] 0 containers: []
	W0610 11:50:11.054154   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:50:11.054162   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:50:11.054238   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:50:11.087687   57945 cri.go:89] found id: ""
	I0610 11:50:11.087719   57945 logs.go:276] 0 containers: []
	W0610 11:50:11.087729   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:50:11.087740   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:50:11.087763   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:50:11.101683   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:50:11.101715   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:50:11.173474   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:50:11.173507   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:50:11.173535   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:50:11.261703   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:50:11.261747   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:50:11.301988   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:50:11.302023   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:50:13.857417   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:50:13.870219   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:50:13.870303   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:50:13.905689   57945 cri.go:89] found id: ""
	I0610 11:50:13.905721   57945 logs.go:276] 0 containers: []
	W0610 11:50:13.905732   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:50:13.905740   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:50:13.905807   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:50:13.937627   57945 cri.go:89] found id: ""
	I0610 11:50:13.937655   57945 logs.go:276] 0 containers: []
	W0610 11:50:13.937665   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:50:13.937671   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:50:13.937731   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:50:13.971739   57945 cri.go:89] found id: ""
	I0610 11:50:13.971774   57945 logs.go:276] 0 containers: []
	W0610 11:50:13.971785   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:50:13.971793   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:50:13.971856   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:50:14.015155   57945 cri.go:89] found id: ""
	I0610 11:50:14.015186   57945 logs.go:276] 0 containers: []
	W0610 11:50:14.015196   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:50:14.015203   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:50:14.015273   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:50:14.051356   57945 cri.go:89] found id: ""
	I0610 11:50:14.051387   57945 logs.go:276] 0 containers: []
	W0610 11:50:14.051397   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:50:14.051404   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:50:14.051463   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:50:14.083436   57945 cri.go:89] found id: ""
	I0610 11:50:14.083467   57945 logs.go:276] 0 containers: []
	W0610 11:50:14.083475   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:50:14.083482   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:50:14.083542   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:50:14.114848   57945 cri.go:89] found id: ""
	I0610 11:50:14.114887   57945 logs.go:276] 0 containers: []
	W0610 11:50:14.114895   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:50:14.114901   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:50:14.114950   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:50:14.146657   57945 cri.go:89] found id: ""
	I0610 11:50:14.146688   57945 logs.go:276] 0 containers: []
	W0610 11:50:14.146700   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:50:14.146711   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:50:14.146726   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:50:14.197559   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:50:14.197590   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:50:14.210729   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:50:14.210759   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:50:14.279694   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:50:14.279720   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:50:14.279737   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:50:14.356374   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:50:14.356410   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:50:16.895220   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:50:16.908902   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:50:16.909011   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:50:16.944101   57945 cri.go:89] found id: ""
	I0610 11:50:16.944131   57945 logs.go:276] 0 containers: []
	W0610 11:50:16.944143   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:50:16.944150   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:50:16.944210   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:50:16.977679   57945 cri.go:89] found id: ""
	I0610 11:50:16.977709   57945 logs.go:276] 0 containers: []
	W0610 11:50:16.977720   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:50:16.977727   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:50:16.977814   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:50:17.011534   57945 cri.go:89] found id: ""
	I0610 11:50:17.011558   57945 logs.go:276] 0 containers: []
	W0610 11:50:17.011565   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:50:17.011571   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:50:17.011619   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:50:17.048241   57945 cri.go:89] found id: ""
	I0610 11:50:17.048267   57945 logs.go:276] 0 containers: []
	W0610 11:50:17.048276   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:50:17.048283   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:50:17.048329   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:50:17.080935   57945 cri.go:89] found id: ""
	I0610 11:50:17.080981   57945 logs.go:276] 0 containers: []
	W0610 11:50:17.080992   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:50:17.081014   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:50:17.081068   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:50:17.112889   57945 cri.go:89] found id: ""
	I0610 11:50:17.112927   57945 logs.go:276] 0 containers: []
	W0610 11:50:17.112938   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:50:17.112956   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:50:17.113019   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:50:17.143871   57945 cri.go:89] found id: ""
	I0610 11:50:17.143900   57945 logs.go:276] 0 containers: []
	W0610 11:50:17.143910   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:50:17.143918   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:50:17.143973   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:50:17.177396   57945 cri.go:89] found id: ""
	I0610 11:50:17.177429   57945 logs.go:276] 0 containers: []
	W0610 11:50:17.177440   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:50:17.177450   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:50:17.177469   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:50:17.258284   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:50:17.258319   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:50:17.295486   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:50:17.295516   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:50:17.349717   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:50:17.349753   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:50:17.362864   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:50:17.362893   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:50:17.432114   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:50:19.932297   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:50:19.947432   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:50:19.947497   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:50:19.982159   57945 cri.go:89] found id: ""
	I0610 11:50:19.982186   57945 logs.go:276] 0 containers: []
	W0610 11:50:19.982194   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:50:19.982200   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:50:19.982255   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:50:20.013915   57945 cri.go:89] found id: ""
	I0610 11:50:20.013946   57945 logs.go:276] 0 containers: []
	W0610 11:50:20.013957   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:50:20.013965   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:50:20.014031   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:50:20.049528   57945 cri.go:89] found id: ""
	I0610 11:50:20.049552   57945 logs.go:276] 0 containers: []
	W0610 11:50:20.049559   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:50:20.049565   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:50:20.049620   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:50:20.080964   57945 cri.go:89] found id: ""
	I0610 11:50:20.081001   57945 logs.go:276] 0 containers: []
	W0610 11:50:20.081021   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:50:20.081037   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:50:20.081105   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:50:20.112801   57945 cri.go:89] found id: ""
	I0610 11:50:20.112830   57945 logs.go:276] 0 containers: []
	W0610 11:50:20.112838   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:50:20.112843   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:50:20.112895   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:50:20.144126   57945 cri.go:89] found id: ""
	I0610 11:50:20.144157   57945 logs.go:276] 0 containers: []
	W0610 11:50:20.144169   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:50:20.144177   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:50:20.144239   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:50:20.178963   57945 cri.go:89] found id: ""
	I0610 11:50:20.178988   57945 logs.go:276] 0 containers: []
	W0610 11:50:20.178995   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:50:20.179001   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:50:20.179114   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:50:20.214029   57945 cri.go:89] found id: ""
	I0610 11:50:20.214053   57945 logs.go:276] 0 containers: []
	W0610 11:50:20.214063   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:50:20.214073   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:50:20.214088   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:50:20.264782   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:50:20.264816   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:50:20.277965   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:50:20.277992   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:50:20.344986   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:50:20.345007   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:50:20.345024   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:50:20.422239   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:50:20.422274   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:50:22.961659   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:50:22.974306   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:50:22.974368   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:50:23.006840   57945 cri.go:89] found id: ""
	I0610 11:50:23.006860   57945 logs.go:276] 0 containers: []
	W0610 11:50:23.006867   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:50:23.006872   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:50:23.006916   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:50:23.044539   57945 cri.go:89] found id: ""
	I0610 11:50:23.044570   57945 logs.go:276] 0 containers: []
	W0610 11:50:23.044581   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:50:23.044588   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:50:23.044648   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:50:23.076343   57945 cri.go:89] found id: ""
	I0610 11:50:23.076371   57945 logs.go:276] 0 containers: []
	W0610 11:50:23.076379   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:50:23.076385   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:50:23.076447   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:50:23.109567   57945 cri.go:89] found id: ""
	I0610 11:50:23.109595   57945 logs.go:276] 0 containers: []
	W0610 11:50:23.109607   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:50:23.109615   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:50:23.109682   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:50:23.142261   57945 cri.go:89] found id: ""
	I0610 11:50:23.142286   57945 logs.go:276] 0 containers: []
	W0610 11:50:23.142294   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:50:23.142300   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:50:23.142361   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:50:23.173631   57945 cri.go:89] found id: ""
	I0610 11:50:23.173664   57945 logs.go:276] 0 containers: []
	W0610 11:50:23.173675   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:50:23.173682   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:50:23.173750   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:50:23.206910   57945 cri.go:89] found id: ""
	I0610 11:50:23.206939   57945 logs.go:276] 0 containers: []
	W0610 11:50:23.206950   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:50:23.206958   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:50:23.207028   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:50:23.239164   57945 cri.go:89] found id: ""
	I0610 11:50:23.239188   57945 logs.go:276] 0 containers: []
	W0610 11:50:23.239195   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:50:23.239204   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:50:23.239216   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:50:23.252537   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:50:23.252563   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:50:23.321735   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:50:23.321758   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:50:23.321773   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:50:23.402374   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:50:23.402409   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:50:23.440419   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:50:23.440445   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:50:25.987589   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:50:26.000480   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:50:26.000542   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:50:26.034930   57945 cri.go:89] found id: ""
	I0610 11:50:26.034960   57945 logs.go:276] 0 containers: []
	W0610 11:50:26.034969   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:50:26.034975   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:50:26.035027   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:50:26.070037   57945 cri.go:89] found id: ""
	I0610 11:50:26.070069   57945 logs.go:276] 0 containers: []
	W0610 11:50:26.070077   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:50:26.070089   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:50:26.070139   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:50:26.102411   57945 cri.go:89] found id: ""
	I0610 11:50:26.102445   57945 logs.go:276] 0 containers: []
	W0610 11:50:26.102454   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:50:26.102469   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:50:26.102528   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:50:26.136606   57945 cri.go:89] found id: ""
	I0610 11:50:26.136634   57945 logs.go:276] 0 containers: []
	W0610 11:50:26.136642   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:50:26.136649   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:50:26.136710   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:50:26.168901   57945 cri.go:89] found id: ""
	I0610 11:50:26.168934   57945 logs.go:276] 0 containers: []
	W0610 11:50:26.168941   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:50:26.168959   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:50:26.169021   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:50:26.200831   57945 cri.go:89] found id: ""
	I0610 11:50:26.200855   57945 logs.go:276] 0 containers: []
	W0610 11:50:26.200862   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:50:26.200868   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:50:26.200927   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:50:26.233229   57945 cri.go:89] found id: ""
	I0610 11:50:26.233258   57945 logs.go:276] 0 containers: []
	W0610 11:50:26.233269   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:50:26.233277   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:50:26.233342   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:50:26.265968   57945 cri.go:89] found id: ""
	I0610 11:50:26.265992   57945 logs.go:276] 0 containers: []
	W0610 11:50:26.266001   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:50:26.266011   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:50:26.266022   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:50:26.279303   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:50:26.279333   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:50:26.344962   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:50:26.344989   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:50:26.345006   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:50:26.417301   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:50:26.417334   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:50:26.454875   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:50:26.454909   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:50:29.005067   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:50:29.017675   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:50:29.017741   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:50:29.051760   57945 cri.go:89] found id: ""
	I0610 11:50:29.051791   57945 logs.go:276] 0 containers: []
	W0610 11:50:29.051801   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:50:29.051808   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:50:29.051874   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:50:29.083396   57945 cri.go:89] found id: ""
	I0610 11:50:29.083426   57945 logs.go:276] 0 containers: []
	W0610 11:50:29.083434   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:50:29.083440   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:50:29.083504   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:50:29.115265   57945 cri.go:89] found id: ""
	I0610 11:50:29.115301   57945 logs.go:276] 0 containers: []
	W0610 11:50:29.115313   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:50:29.115325   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:50:29.115387   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:50:29.146314   57945 cri.go:89] found id: ""
	I0610 11:50:29.146346   57945 logs.go:276] 0 containers: []
	W0610 11:50:29.146357   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:50:29.146367   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:50:29.146434   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:50:29.178922   57945 cri.go:89] found id: ""
	I0610 11:50:29.178955   57945 logs.go:276] 0 containers: []
	W0610 11:50:29.178967   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:50:29.178974   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:50:29.179046   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:50:29.212190   57945 cri.go:89] found id: ""
	I0610 11:50:29.212215   57945 logs.go:276] 0 containers: []
	W0610 11:50:29.212222   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:50:29.212228   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:50:29.212292   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:50:29.244740   57945 cri.go:89] found id: ""
	I0610 11:50:29.244770   57945 logs.go:276] 0 containers: []
	W0610 11:50:29.244781   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:50:29.244789   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:50:29.244850   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:50:29.276398   57945 cri.go:89] found id: ""
	I0610 11:50:29.276426   57945 logs.go:276] 0 containers: []
	W0610 11:50:29.276436   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:50:29.276447   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:50:29.276462   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:50:29.289662   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:50:29.289691   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:50:29.358700   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:50:29.358729   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:50:29.358748   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:50:29.433529   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:50:29.433574   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:50:29.489588   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:50:29.489621   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
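	The cycle above repeats the same probe for every control-plane component: run `sudo crictl ps -a --quiet --name=<component>`, and when the output is empty, record that no matching container was found before falling back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. Below is a minimal Go sketch of that probe loop, assuming only the component names and the crictl invocation visible in the log; the helper and function names are illustrative, not minikube's own code.

	// Sketch of the per-component container probe seen in the log lines above.
	// Assumptions: crictl is on PATH and sudo is passwordless, as implied by the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers runs the same invocation as the log:
	// sudo crictl ps -a --quiet --name=<name>
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, c := range components {
			ids, err := listContainers(c)
			if err != nil {
				fmt.Printf("listing %q failed: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				// Mirrors the repeated warning in the log:
				// No container was found matching "<component>"
				fmt.Printf("no container was found matching %q\n", c)
			} else {
				fmt.Printf("%q: %d container(s): %v\n", c, len(ids), ids)
			}
		}
	}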
	I0610 11:50:32.046065   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:50:32.059129   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:50:32.059201   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:50:32.092191   57945 cri.go:89] found id: ""
	I0610 11:50:32.092217   57945 logs.go:276] 0 containers: []
	W0610 11:50:32.092225   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:50:32.092230   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:50:32.092297   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:50:32.131018   57945 cri.go:89] found id: ""
	I0610 11:50:32.131049   57945 logs.go:276] 0 containers: []
	W0610 11:50:32.131065   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:50:32.131073   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:50:32.131138   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:50:32.167205   57945 cri.go:89] found id: ""
	I0610 11:50:32.167241   57945 logs.go:276] 0 containers: []
	W0610 11:50:32.167251   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:50:32.167258   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:50:32.167307   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:50:32.202317   57945 cri.go:89] found id: ""
	I0610 11:50:32.202349   57945 logs.go:276] 0 containers: []
	W0610 11:50:32.202358   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:50:32.202363   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:50:32.202427   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:50:32.236862   57945 cri.go:89] found id: ""
	I0610 11:50:32.236893   57945 logs.go:276] 0 containers: []
	W0610 11:50:32.236900   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:50:32.236906   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:50:32.236973   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:50:32.269102   57945 cri.go:89] found id: ""
	I0610 11:50:32.269136   57945 logs.go:276] 0 containers: []
	W0610 11:50:32.269147   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:50:32.269155   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:50:32.269205   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:50:32.300756   57945 cri.go:89] found id: ""
	I0610 11:50:32.300797   57945 logs.go:276] 0 containers: []
	W0610 11:50:32.300817   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:50:32.300824   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:50:32.300885   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:50:32.334287   57945 cri.go:89] found id: ""
	I0610 11:50:32.334321   57945 logs.go:276] 0 containers: []
	W0610 11:50:32.334331   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:50:32.334339   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:50:32.334350   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:50:32.385195   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:50:32.385230   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:50:32.397782   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:50:32.397808   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:50:32.475610   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:50:32.475635   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:50:32.475649   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:50:32.552453   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:50:32.552487   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:50:35.092829   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:50:35.105969   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:50:35.106055   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:50:35.143215   57945 cri.go:89] found id: ""
	I0610 11:50:35.143242   57945 logs.go:276] 0 containers: []
	W0610 11:50:35.143250   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:50:35.143256   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:50:35.143315   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:50:35.179569   57945 cri.go:89] found id: ""
	I0610 11:50:35.179597   57945 logs.go:276] 0 containers: []
	W0610 11:50:35.179605   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:50:35.179610   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:50:35.179663   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:50:35.233050   57945 cri.go:89] found id: ""
	I0610 11:50:35.233079   57945 logs.go:276] 0 containers: []
	W0610 11:50:35.233089   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:50:35.233094   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:50:35.233146   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:50:35.266072   57945 cri.go:89] found id: ""
	I0610 11:50:35.266097   57945 logs.go:276] 0 containers: []
	W0610 11:50:35.266107   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:50:35.266115   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:50:35.266182   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:50:35.297919   57945 cri.go:89] found id: ""
	I0610 11:50:35.297953   57945 logs.go:276] 0 containers: []
	W0610 11:50:35.297967   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:50:35.297974   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:50:35.298039   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:50:35.329830   57945 cri.go:89] found id: ""
	I0610 11:50:35.329860   57945 logs.go:276] 0 containers: []
	W0610 11:50:35.329869   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:50:35.329877   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:50:35.329945   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:50:35.362172   57945 cri.go:89] found id: ""
	I0610 11:50:35.362198   57945 logs.go:276] 0 containers: []
	W0610 11:50:35.362207   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:50:35.362217   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:50:35.362278   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:50:35.394025   57945 cri.go:89] found id: ""
	I0610 11:50:35.394051   57945 logs.go:276] 0 containers: []
	W0610 11:50:35.394060   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:50:35.394073   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:50:35.394088   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:50:35.455971   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:50:35.456000   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:50:35.470573   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:50:35.470596   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:50:35.537946   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:50:35.537970   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:50:35.537983   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:50:35.614854   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:50:35.614892   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:50:38.152190   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:50:38.164977   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:50:38.165042   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:50:38.202617   57945 cri.go:89] found id: ""
	I0610 11:50:38.202641   57945 logs.go:276] 0 containers: []
	W0610 11:50:38.202648   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:50:38.202654   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:50:38.202701   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:50:38.239778   57945 cri.go:89] found id: ""
	I0610 11:50:38.239804   57945 logs.go:276] 0 containers: []
	W0610 11:50:38.239811   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:50:38.239816   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:50:38.239869   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:50:38.277589   57945 cri.go:89] found id: ""
	I0610 11:50:38.277614   57945 logs.go:276] 0 containers: []
	W0610 11:50:38.277621   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:50:38.277626   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:50:38.277675   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:50:38.311090   57945 cri.go:89] found id: ""
	I0610 11:50:38.311114   57945 logs.go:276] 0 containers: []
	W0610 11:50:38.311122   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:50:38.311129   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:50:38.311175   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:50:38.346779   57945 cri.go:89] found id: ""
	I0610 11:50:38.346806   57945 logs.go:276] 0 containers: []
	W0610 11:50:38.346814   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:50:38.346820   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:50:38.346869   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:50:38.386233   57945 cri.go:89] found id: ""
	I0610 11:50:38.386263   57945 logs.go:276] 0 containers: []
	W0610 11:50:38.386275   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:50:38.386283   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:50:38.386344   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:50:38.427287   57945 cri.go:89] found id: ""
	I0610 11:50:38.427316   57945 logs.go:276] 0 containers: []
	W0610 11:50:38.427328   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:50:38.427335   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:50:38.427398   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:50:38.464908   57945 cri.go:89] found id: ""
	I0610 11:50:38.464930   57945 logs.go:276] 0 containers: []
	W0610 11:50:38.464937   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:50:38.464958   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:50:38.464969   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:50:38.518721   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:50:38.518755   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:50:38.533478   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:50:38.533509   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:50:38.595246   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:50:38.595275   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:50:38.595291   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:50:38.676388   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:50:38.676420   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:50:41.216456   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:50:41.228680   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:50:41.228744   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:50:41.265512   57945 cri.go:89] found id: ""
	I0610 11:50:41.265537   57945 logs.go:276] 0 containers: []
	W0610 11:50:41.265546   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:50:41.265553   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:50:41.265616   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:50:41.299351   57945 cri.go:89] found id: ""
	I0610 11:50:41.299376   57945 logs.go:276] 0 containers: []
	W0610 11:50:41.299384   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:50:41.299391   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:50:41.299451   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:50:41.332015   57945 cri.go:89] found id: ""
	I0610 11:50:41.332042   57945 logs.go:276] 0 containers: []
	W0610 11:50:41.332051   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:50:41.332059   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:50:41.332121   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:50:41.365279   57945 cri.go:89] found id: ""
	I0610 11:50:41.365308   57945 logs.go:276] 0 containers: []
	W0610 11:50:41.365316   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:50:41.365321   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:50:41.365386   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:50:41.398294   57945 cri.go:89] found id: ""
	I0610 11:50:41.398327   57945 logs.go:276] 0 containers: []
	W0610 11:50:41.398339   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:50:41.398346   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:50:41.398411   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:50:41.430244   57945 cri.go:89] found id: ""
	I0610 11:50:41.430276   57945 logs.go:276] 0 containers: []
	W0610 11:50:41.430287   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:50:41.430294   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:50:41.430355   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:50:41.467359   57945 cri.go:89] found id: ""
	I0610 11:50:41.467385   57945 logs.go:276] 0 containers: []
	W0610 11:50:41.467395   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:50:41.467401   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:50:41.467456   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:50:41.500765   57945 cri.go:89] found id: ""
	I0610 11:50:41.500791   57945 logs.go:276] 0 containers: []
	W0610 11:50:41.500799   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:50:41.500808   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:50:41.500823   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:50:41.513890   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:50:41.513917   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:50:41.583987   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:50:41.584034   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:50:41.584053   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:50:41.667552   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:50:41.667596   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:50:41.718053   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:50:41.718085   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:50:44.272061   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:50:44.284898   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:50:44.284983   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:50:44.318320   57945 cri.go:89] found id: ""
	I0610 11:50:44.318349   57945 logs.go:276] 0 containers: []
	W0610 11:50:44.318361   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:50:44.318368   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:50:44.318425   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:50:44.350937   57945 cri.go:89] found id: ""
	I0610 11:50:44.350968   57945 logs.go:276] 0 containers: []
	W0610 11:50:44.350976   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:50:44.350981   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:50:44.351036   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:50:44.383189   57945 cri.go:89] found id: ""
	I0610 11:50:44.383219   57945 logs.go:276] 0 containers: []
	W0610 11:50:44.383230   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:50:44.383238   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:50:44.383290   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:50:44.419479   57945 cri.go:89] found id: ""
	I0610 11:50:44.419511   57945 logs.go:276] 0 containers: []
	W0610 11:50:44.419522   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:50:44.419529   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:50:44.419591   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:50:44.452378   57945 cri.go:89] found id: ""
	I0610 11:50:44.452406   57945 logs.go:276] 0 containers: []
	W0610 11:50:44.452418   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:50:44.452426   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:50:44.452480   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:50:44.485809   57945 cri.go:89] found id: ""
	I0610 11:50:44.485840   57945 logs.go:276] 0 containers: []
	W0610 11:50:44.485851   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:50:44.485859   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:50:44.485918   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:50:44.517951   57945 cri.go:89] found id: ""
	I0610 11:50:44.517988   57945 logs.go:276] 0 containers: []
	W0610 11:50:44.518006   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:50:44.518014   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:50:44.518066   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:50:44.549854   57945 cri.go:89] found id: ""
	I0610 11:50:44.549882   57945 logs.go:276] 0 containers: []
	W0610 11:50:44.549892   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:50:44.549903   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:50:44.549918   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:50:44.601979   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:50:44.602014   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:50:44.614946   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:50:44.614973   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:50:44.679475   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:50:44.679495   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:50:44.679510   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:50:44.756872   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:50:44.756914   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:50:47.295484   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:50:47.308879   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:50:47.308975   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:50:47.346311   57945 cri.go:89] found id: ""
	I0610 11:50:47.346340   57945 logs.go:276] 0 containers: []
	W0610 11:50:47.346351   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:50:47.346359   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:50:47.346421   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:50:47.387118   57945 cri.go:89] found id: ""
	I0610 11:50:47.387146   57945 logs.go:276] 0 containers: []
	W0610 11:50:47.387154   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:50:47.387159   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:50:47.387222   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:50:47.429209   57945 cri.go:89] found id: ""
	I0610 11:50:47.429239   57945 logs.go:276] 0 containers: []
	W0610 11:50:47.429257   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:50:47.429265   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:50:47.429315   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:50:47.477845   57945 cri.go:89] found id: ""
	I0610 11:50:47.477869   57945 logs.go:276] 0 containers: []
	W0610 11:50:47.477876   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:50:47.477881   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:50:47.477928   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:50:47.514000   57945 cri.go:89] found id: ""
	I0610 11:50:47.514031   57945 logs.go:276] 0 containers: []
	W0610 11:50:47.514042   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:50:47.514048   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:50:47.514108   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:50:47.549069   57945 cri.go:89] found id: ""
	I0610 11:50:47.549099   57945 logs.go:276] 0 containers: []
	W0610 11:50:47.549109   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:50:47.549117   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:50:47.549177   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:50:47.583156   57945 cri.go:89] found id: ""
	I0610 11:50:47.583199   57945 logs.go:276] 0 containers: []
	W0610 11:50:47.583210   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:50:47.583216   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:50:47.583278   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:50:47.616966   57945 cri.go:89] found id: ""
	I0610 11:50:47.616997   57945 logs.go:276] 0 containers: []
	W0610 11:50:47.617007   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:50:47.617018   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:50:47.617033   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:50:47.664877   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:50:47.664927   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:50:47.677923   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:50:47.677958   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:50:47.744598   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:50:47.744623   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:50:47.744640   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:50:47.827633   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:50:47.827673   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:50:50.374606   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:50:50.386754   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:50:50.386832   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:50:50.417002   57945 cri.go:89] found id: ""
	I0610 11:50:50.417036   57945 logs.go:276] 0 containers: []
	W0610 11:50:50.417048   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:50:50.417056   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:50:50.417113   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:50:50.450868   57945 cri.go:89] found id: ""
	I0610 11:50:50.450894   57945 logs.go:276] 0 containers: []
	W0610 11:50:50.450905   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:50:50.450913   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:50:50.450970   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:50:50.483179   57945 cri.go:89] found id: ""
	I0610 11:50:50.483207   57945 logs.go:276] 0 containers: []
	W0610 11:50:50.483215   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:50:50.483221   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:50:50.483295   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:50:50.515195   57945 cri.go:89] found id: ""
	I0610 11:50:50.515222   57945 logs.go:276] 0 containers: []
	W0610 11:50:50.515229   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:50:50.515235   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:50:50.515288   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:50:50.547336   57945 cri.go:89] found id: ""
	I0610 11:50:50.547358   57945 logs.go:276] 0 containers: []
	W0610 11:50:50.547366   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:50:50.547371   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:50:50.547419   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:50:50.579243   57945 cri.go:89] found id: ""
	I0610 11:50:50.579280   57945 logs.go:276] 0 containers: []
	W0610 11:50:50.579291   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:50:50.579299   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:50:50.579361   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:50:50.613781   57945 cri.go:89] found id: ""
	I0610 11:50:50.613810   57945 logs.go:276] 0 containers: []
	W0610 11:50:50.613821   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:50:50.613828   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:50:50.613883   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:50:50.644667   57945 cri.go:89] found id: ""
	I0610 11:50:50.644696   57945 logs.go:276] 0 containers: []
	W0610 11:50:50.644707   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:50:50.644723   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:50:50.644741   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:50:50.698408   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:50:50.698449   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:50:50.712794   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:50:50.712826   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:50:50.775104   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:50:50.775134   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:50:50.775147   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:50:50.852856   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:50:50.852895   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:50:53.391323   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:50:53.404007   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:50:53.404086   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:50:53.440938   57945 cri.go:89] found id: ""
	I0610 11:50:53.440983   57945 logs.go:276] 0 containers: []
	W0610 11:50:53.440993   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:50:53.441001   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:50:53.441085   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:50:53.478507   57945 cri.go:89] found id: ""
	I0610 11:50:53.478535   57945 logs.go:276] 0 containers: []
	W0610 11:50:53.478543   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:50:53.478548   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:50:53.478604   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:50:53.513205   57945 cri.go:89] found id: ""
	I0610 11:50:53.513240   57945 logs.go:276] 0 containers: []
	W0610 11:50:53.513265   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:50:53.513274   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:50:53.513353   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:50:53.551646   57945 cri.go:89] found id: ""
	I0610 11:50:53.551670   57945 logs.go:276] 0 containers: []
	W0610 11:50:53.551680   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:50:53.551689   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:50:53.551747   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:50:53.594391   57945 cri.go:89] found id: ""
	I0610 11:50:53.594422   57945 logs.go:276] 0 containers: []
	W0610 11:50:53.594434   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:50:53.594442   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:50:53.594501   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:50:53.628067   57945 cri.go:89] found id: ""
	I0610 11:50:53.628101   57945 logs.go:276] 0 containers: []
	W0610 11:50:53.628111   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:50:53.628118   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:50:53.628168   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:50:53.661807   57945 cri.go:89] found id: ""
	I0610 11:50:53.661833   57945 logs.go:276] 0 containers: []
	W0610 11:50:53.661840   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:50:53.661846   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:50:53.661901   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:50:53.695820   57945 cri.go:89] found id: ""
	I0610 11:50:53.695844   57945 logs.go:276] 0 containers: []
	W0610 11:50:53.695851   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:50:53.695858   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:50:53.695870   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:50:53.763628   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:50:53.763648   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:50:53.763662   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:50:53.849101   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:50:53.849147   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:50:53.888778   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:50:53.888806   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:50:53.942467   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:50:53.942498   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:50:56.458212   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:50:56.470507   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:50:56.470569   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:50:56.501864   57945 cri.go:89] found id: ""
	I0610 11:50:56.501895   57945 logs.go:276] 0 containers: []
	W0610 11:50:56.501906   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:50:56.501915   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:50:56.501976   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:50:56.533939   57945 cri.go:89] found id: ""
	I0610 11:50:56.533961   57945 logs.go:276] 0 containers: []
	W0610 11:50:56.533971   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:50:56.533978   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:50:56.534036   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:50:56.566287   57945 cri.go:89] found id: ""
	I0610 11:50:56.566318   57945 logs.go:276] 0 containers: []
	W0610 11:50:56.566327   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:50:56.566333   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:50:56.566381   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:50:56.598404   57945 cri.go:89] found id: ""
	I0610 11:50:56.598435   57945 logs.go:276] 0 containers: []
	W0610 11:50:56.598446   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:50:56.598455   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:50:56.598525   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:50:56.630730   57945 cri.go:89] found id: ""
	I0610 11:50:56.630755   57945 logs.go:276] 0 containers: []
	W0610 11:50:56.630763   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:50:56.630768   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:50:56.630828   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:50:56.662593   57945 cri.go:89] found id: ""
	I0610 11:50:56.662623   57945 logs.go:276] 0 containers: []
	W0610 11:50:56.662633   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:50:56.662640   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:50:56.662700   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:50:56.698008   57945 cri.go:89] found id: ""
	I0610 11:50:56.698041   57945 logs.go:276] 0 containers: []
	W0610 11:50:56.698053   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:50:56.698061   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:50:56.698118   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:50:56.732346   57945 cri.go:89] found id: ""
	I0610 11:50:56.732376   57945 logs.go:276] 0 containers: []
	W0610 11:50:56.732389   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:50:56.732401   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:50:56.732415   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:50:56.807290   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:50:56.807324   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:50:56.843753   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:50:56.843784   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:50:56.893950   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:50:56.893983   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:50:56.906908   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:50:56.906941   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:50:56.975955   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
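	Every "describe nodes" attempt in these cycles fails the same way: kubectl cannot reach localhost:8443, which indicates nothing is listening on the apiserver port at all rather than an apiserver that is up but unhealthy. A small Go sketch of that distinction follows; the address and timeout are assumptions for illustration, and this check is not part of minikube or the test harness.

	// Sketch: distinguish "connection refused" (no apiserver listening) from
	// a reachable but unhealthy apiserver, matching the kubectl error above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// "connection refused" here matches the kubectl error in the log:
			// the port is closed, so every kubectl call fails immediately.
			fmt.Printf("apiserver port not reachable: %v\n", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443; a kubectl failure would need a different explanation")
	}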
	I0610 11:50:59.476933   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:50:59.489931   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:50:59.489985   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:50:59.520866   57945 cri.go:89] found id: ""
	I0610 11:50:59.520892   57945 logs.go:276] 0 containers: []
	W0610 11:50:59.520900   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:50:59.520906   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:50:59.520966   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:50:59.555380   57945 cri.go:89] found id: ""
	I0610 11:50:59.555407   57945 logs.go:276] 0 containers: []
	W0610 11:50:59.555415   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:50:59.555421   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:50:59.555479   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:50:59.591764   57945 cri.go:89] found id: ""
	I0610 11:50:59.591787   57945 logs.go:276] 0 containers: []
	W0610 11:50:59.591795   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:50:59.591801   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:50:59.591860   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:50:59.628087   57945 cri.go:89] found id: ""
	I0610 11:50:59.628116   57945 logs.go:276] 0 containers: []
	W0610 11:50:59.628127   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:50:59.628134   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:50:59.628199   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:50:59.660667   57945 cri.go:89] found id: ""
	I0610 11:50:59.660695   57945 logs.go:276] 0 containers: []
	W0610 11:50:59.660705   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:50:59.660712   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:50:59.660774   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:50:59.693357   57945 cri.go:89] found id: ""
	I0610 11:50:59.693383   57945 logs.go:276] 0 containers: []
	W0610 11:50:59.693392   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:50:59.693403   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:50:59.693458   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:50:59.726149   57945 cri.go:89] found id: ""
	I0610 11:50:59.726181   57945 logs.go:276] 0 containers: []
	W0610 11:50:59.726190   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:50:59.726203   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:50:59.726272   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:50:59.766063   57945 cri.go:89] found id: ""
	I0610 11:50:59.766091   57945 logs.go:276] 0 containers: []
	W0610 11:50:59.766101   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:50:59.766109   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:50:59.766122   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:50:59.805746   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:50:59.805773   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:50:59.854304   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:50:59.854335   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:50:59.867018   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:50:59.867060   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:50:59.936527   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:50:59.936552   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:50:59.936566   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:02.515960   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:02.528381   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:02.528444   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:02.560257   57945 cri.go:89] found id: ""
	I0610 11:51:02.560280   57945 logs.go:276] 0 containers: []
	W0610 11:51:02.560288   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:02.560293   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:02.560351   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:02.597613   57945 cri.go:89] found id: ""
	I0610 11:51:02.597637   57945 logs.go:276] 0 containers: []
	W0610 11:51:02.597645   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:02.597650   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:02.597737   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:02.630384   57945 cri.go:89] found id: ""
	I0610 11:51:02.630415   57945 logs.go:276] 0 containers: []
	W0610 11:51:02.630426   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:02.630433   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:02.630491   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:02.662120   57945 cri.go:89] found id: ""
	I0610 11:51:02.662144   57945 logs.go:276] 0 containers: []
	W0610 11:51:02.662155   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:02.662163   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:02.662227   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:02.695790   57945 cri.go:89] found id: ""
	I0610 11:51:02.695818   57945 logs.go:276] 0 containers: []
	W0610 11:51:02.695833   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:02.695841   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:02.695906   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:02.729841   57945 cri.go:89] found id: ""
	I0610 11:51:02.729868   57945 logs.go:276] 0 containers: []
	W0610 11:51:02.729875   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:02.729881   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:02.729927   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:02.766766   57945 cri.go:89] found id: ""
	I0610 11:51:02.766793   57945 logs.go:276] 0 containers: []
	W0610 11:51:02.766804   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:02.766812   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:02.766875   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:02.799880   57945 cri.go:89] found id: ""
	I0610 11:51:02.799907   57945 logs.go:276] 0 containers: []
	W0610 11:51:02.799916   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:02.799926   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:02.799942   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:02.849802   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:02.849834   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:02.865021   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:02.865053   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:02.933896   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:02.933915   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:02.933927   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:03.018162   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:03.018201   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
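The block above is one complete probe cycle: minikube looks for a running kube-apiserver process, asks CRI-O for containers of each control-plane component, finds none, and falls back to gathering kubelet, dmesg, node-describe, CRI-O and container-status output. For readers who want to repeat the container check by hand, a minimal sketch follows; it assumes shell access to the node (for example via `minikube ssh`) and reuses the exact crictl invocation shown in the log.

    # Sketch only: loop over the same component names minikube queries above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no containers matching $name"   # matches the empty results in the log
      else
        echo "$name: $ids"
      fi
    done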
	I0610 11:51:05.565245   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:05.578076   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:05.578143   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:05.611591   57945 cri.go:89] found id: ""
	I0610 11:51:05.611620   57945 logs.go:276] 0 containers: []
	W0610 11:51:05.611630   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:05.611638   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:05.611702   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:05.646552   57945 cri.go:89] found id: ""
	I0610 11:51:05.646576   57945 logs.go:276] 0 containers: []
	W0610 11:51:05.646584   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:05.646590   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:05.646656   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:05.678348   57945 cri.go:89] found id: ""
	I0610 11:51:05.678375   57945 logs.go:276] 0 containers: []
	W0610 11:51:05.678383   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:05.678388   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:05.678450   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:05.716002   57945 cri.go:89] found id: ""
	I0610 11:51:05.716023   57945 logs.go:276] 0 containers: []
	W0610 11:51:05.716030   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:05.716036   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:05.716103   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:05.751040   57945 cri.go:89] found id: ""
	I0610 11:51:05.751064   57945 logs.go:276] 0 containers: []
	W0610 11:51:05.751071   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:05.751077   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:05.751142   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:05.784714   57945 cri.go:89] found id: ""
	I0610 11:51:05.784741   57945 logs.go:276] 0 containers: []
	W0610 11:51:05.784752   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:05.784766   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:05.784829   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:05.823883   57945 cri.go:89] found id: ""
	I0610 11:51:05.823911   57945 logs.go:276] 0 containers: []
	W0610 11:51:05.823921   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:05.823928   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:05.823993   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:05.855858   57945 cri.go:89] found id: ""
	I0610 11:51:05.855890   57945 logs.go:276] 0 containers: []
	W0610 11:51:05.855902   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:05.855912   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:05.855925   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:05.906061   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:05.906103   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:05.919762   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:05.919795   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:05.990427   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:05.990450   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:05.990463   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:06.069428   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:06.069469   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:08.613641   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:08.625981   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:08.626042   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:08.657748   57945 cri.go:89] found id: ""
	I0610 11:51:08.657778   57945 logs.go:276] 0 containers: []
	W0610 11:51:08.657789   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:08.657797   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:08.657860   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:08.690784   57945 cri.go:89] found id: ""
	I0610 11:51:08.690811   57945 logs.go:276] 0 containers: []
	W0610 11:51:08.690822   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:08.690829   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:08.690887   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:08.723869   57945 cri.go:89] found id: ""
	I0610 11:51:08.723896   57945 logs.go:276] 0 containers: []
	W0610 11:51:08.723905   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:08.723910   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:08.723970   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:08.760188   57945 cri.go:89] found id: ""
	I0610 11:51:08.760223   57945 logs.go:276] 0 containers: []
	W0610 11:51:08.760235   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:08.760242   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:08.760301   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:08.791848   57945 cri.go:89] found id: ""
	I0610 11:51:08.791877   57945 logs.go:276] 0 containers: []
	W0610 11:51:08.791884   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:08.791889   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:08.791938   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:08.824692   57945 cri.go:89] found id: ""
	I0610 11:51:08.824719   57945 logs.go:276] 0 containers: []
	W0610 11:51:08.824728   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:08.824734   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:08.824784   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:08.858103   57945 cri.go:89] found id: ""
	I0610 11:51:08.858134   57945 logs.go:276] 0 containers: []
	W0610 11:51:08.858146   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:08.858153   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:08.858214   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:08.890371   57945 cri.go:89] found id: ""
	I0610 11:51:08.890394   57945 logs.go:276] 0 containers: []
	W0610 11:51:08.890401   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:08.890409   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:08.890422   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:08.967737   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:08.967776   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:09.005676   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:09.005703   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:09.064319   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:09.064362   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:09.079575   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:09.079607   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:09.147663   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
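Every cycle ends the same way: `kubectl describe nodes` fails because nothing answers on localhost:8443, i.e. the API server is not serving. A hedged sketch of quick manual checks on the node follows; these ss/curl/journalctl invocations are standard tooling chosen for illustration and are not taken from the report itself.

    # Is anything bound to the apiserver port?
    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    # Does the apiserver answer a health probe? (-k: tolerate self-signed certs)
    curl -sk https://localhost:8443/healthz || echo "no response from apiserver"
    # Recent kubelet activity, the same unit the log gatherer reads above
    sudo journalctl -u kubelet --no-pager -n 100 | tail -n 40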
	I0610 11:51:11.648928   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:11.663189   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:11.663246   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:11.697336   57945 cri.go:89] found id: ""
	I0610 11:51:11.697363   57945 logs.go:276] 0 containers: []
	W0610 11:51:11.697372   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:11.697380   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:11.697436   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:11.736328   57945 cri.go:89] found id: ""
	I0610 11:51:11.736354   57945 logs.go:276] 0 containers: []
	W0610 11:51:11.736367   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:11.736372   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:11.736436   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:11.776523   57945 cri.go:89] found id: ""
	I0610 11:51:11.776549   57945 logs.go:276] 0 containers: []
	W0610 11:51:11.776559   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:11.776567   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:11.776635   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:11.812069   57945 cri.go:89] found id: ""
	I0610 11:51:11.812096   57945 logs.go:276] 0 containers: []
	W0610 11:51:11.812108   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:11.812116   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:11.812169   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:11.845250   57945 cri.go:89] found id: ""
	I0610 11:51:11.845274   57945 logs.go:276] 0 containers: []
	W0610 11:51:11.845282   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:11.845288   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:11.845348   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:11.877291   57945 cri.go:89] found id: ""
	I0610 11:51:11.877318   57945 logs.go:276] 0 containers: []
	W0610 11:51:11.877326   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:11.877331   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:11.877377   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:11.915126   57945 cri.go:89] found id: ""
	I0610 11:51:11.915159   57945 logs.go:276] 0 containers: []
	W0610 11:51:11.915170   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:11.915176   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:11.915232   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:11.950425   57945 cri.go:89] found id: ""
	I0610 11:51:11.950453   57945 logs.go:276] 0 containers: []
	W0610 11:51:11.950463   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:11.950474   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:11.950489   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:12.029761   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:12.029800   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:12.066477   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:12.066504   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:12.116861   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:12.116907   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:12.130040   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:12.130073   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:12.198970   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:14.699296   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:14.712250   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:14.712343   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:14.745483   57945 cri.go:89] found id: ""
	I0610 11:51:14.745510   57945 logs.go:276] 0 containers: []
	W0610 11:51:14.745519   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:14.745527   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:14.745588   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:14.777749   57945 cri.go:89] found id: ""
	I0610 11:51:14.777779   57945 logs.go:276] 0 containers: []
	W0610 11:51:14.777789   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:14.777797   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:14.777859   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:14.809404   57945 cri.go:89] found id: ""
	I0610 11:51:14.809433   57945 logs.go:276] 0 containers: []
	W0610 11:51:14.809444   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:14.809454   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:14.809517   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:14.840774   57945 cri.go:89] found id: ""
	I0610 11:51:14.840799   57945 logs.go:276] 0 containers: []
	W0610 11:51:14.840809   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:14.840816   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:14.840884   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:14.871691   57945 cri.go:89] found id: ""
	I0610 11:51:14.871718   57945 logs.go:276] 0 containers: []
	W0610 11:51:14.871725   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:14.871731   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:14.871817   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:14.903806   57945 cri.go:89] found id: ""
	I0610 11:51:14.903831   57945 logs.go:276] 0 containers: []
	W0610 11:51:14.903841   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:14.903849   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:14.903910   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:14.937228   57945 cri.go:89] found id: ""
	I0610 11:51:14.937257   57945 logs.go:276] 0 containers: []
	W0610 11:51:14.937267   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:14.937275   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:14.937332   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:14.971051   57945 cri.go:89] found id: ""
	I0610 11:51:14.971075   57945 logs.go:276] 0 containers: []
	W0610 11:51:14.971082   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:14.971090   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:14.971101   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:15.022957   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:15.023001   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:15.035681   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:15.035711   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:15.102611   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:15.102638   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:15.102657   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:15.176359   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:15.176403   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:17.718392   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:17.732352   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:17.732404   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:17.769883   57945 cri.go:89] found id: ""
	I0610 11:51:17.769911   57945 logs.go:276] 0 containers: []
	W0610 11:51:17.769921   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:17.769931   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:17.770008   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:17.802776   57945 cri.go:89] found id: ""
	I0610 11:51:17.802802   57945 logs.go:276] 0 containers: []
	W0610 11:51:17.802809   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:17.802814   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:17.802867   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:17.839247   57945 cri.go:89] found id: ""
	I0610 11:51:17.839284   57945 logs.go:276] 0 containers: []
	W0610 11:51:17.839296   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:17.839303   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:17.839358   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:17.872677   57945 cri.go:89] found id: ""
	I0610 11:51:17.872703   57945 logs.go:276] 0 containers: []
	W0610 11:51:17.872711   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:17.872717   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:17.872769   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:17.903997   57945 cri.go:89] found id: ""
	I0610 11:51:17.904029   57945 logs.go:276] 0 containers: []
	W0610 11:51:17.904040   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:17.904047   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:17.904106   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:17.936515   57945 cri.go:89] found id: ""
	I0610 11:51:17.936544   57945 logs.go:276] 0 containers: []
	W0610 11:51:17.936553   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:17.936560   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:17.936619   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:17.975583   57945 cri.go:89] found id: ""
	I0610 11:51:17.975617   57945 logs.go:276] 0 containers: []
	W0610 11:51:17.975628   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:17.975636   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:17.975717   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:18.010606   57945 cri.go:89] found id: ""
	I0610 11:51:18.010631   57945 logs.go:276] 0 containers: []
	W0610 11:51:18.010638   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:18.010647   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:18.010670   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:18.063479   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:18.063517   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:18.076957   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:18.076992   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:18.148044   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:18.148071   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:18.148086   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:18.228939   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:18.228996   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:20.767004   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:20.779304   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:20.779381   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:20.811556   57945 cri.go:89] found id: ""
	I0610 11:51:20.811583   57945 logs.go:276] 0 containers: []
	W0610 11:51:20.811593   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:20.811600   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:20.811665   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:20.843813   57945 cri.go:89] found id: ""
	I0610 11:51:20.843845   57945 logs.go:276] 0 containers: []
	W0610 11:51:20.843855   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:20.843863   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:20.843914   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:20.874412   57945 cri.go:89] found id: ""
	I0610 11:51:20.874447   57945 logs.go:276] 0 containers: []
	W0610 11:51:20.874459   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:20.874467   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:20.874529   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:20.912007   57945 cri.go:89] found id: ""
	I0610 11:51:20.912051   57945 logs.go:276] 0 containers: []
	W0610 11:51:20.912062   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:20.912071   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:20.912121   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:20.943481   57945 cri.go:89] found id: ""
	I0610 11:51:20.943512   57945 logs.go:276] 0 containers: []
	W0610 11:51:20.943522   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:20.943529   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:20.943592   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:20.974580   57945 cri.go:89] found id: ""
	I0610 11:51:20.974603   57945 logs.go:276] 0 containers: []
	W0610 11:51:20.974610   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:20.974616   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:20.974671   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:21.007640   57945 cri.go:89] found id: ""
	I0610 11:51:21.007669   57945 logs.go:276] 0 containers: []
	W0610 11:51:21.007679   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:21.007686   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:21.007749   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:21.044350   57945 cri.go:89] found id: ""
	I0610 11:51:21.044372   57945 logs.go:276] 0 containers: []
	W0610 11:51:21.044380   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:21.044388   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:21.044400   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:21.093486   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:21.093518   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:21.106303   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:21.106331   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:21.170613   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:21.170633   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:21.170646   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:21.249047   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:21.249086   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:23.787363   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:23.801172   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:23.801265   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:23.836469   57945 cri.go:89] found id: ""
	I0610 11:51:23.836500   57945 logs.go:276] 0 containers: []
	W0610 11:51:23.836510   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:23.836517   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:23.836576   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:23.868040   57945 cri.go:89] found id: ""
	I0610 11:51:23.868069   57945 logs.go:276] 0 containers: []
	W0610 11:51:23.868091   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:23.868098   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:23.868162   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:23.899116   57945 cri.go:89] found id: ""
	I0610 11:51:23.899140   57945 logs.go:276] 0 containers: []
	W0610 11:51:23.899147   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:23.899153   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:23.899205   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:23.929541   57945 cri.go:89] found id: ""
	I0610 11:51:23.929564   57945 logs.go:276] 0 containers: []
	W0610 11:51:23.929571   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:23.929576   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:23.929628   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:23.962653   57945 cri.go:89] found id: ""
	I0610 11:51:23.962679   57945 logs.go:276] 0 containers: []
	W0610 11:51:23.962687   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:23.962693   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:23.962746   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:23.995363   57945 cri.go:89] found id: ""
	I0610 11:51:23.995391   57945 logs.go:276] 0 containers: []
	W0610 11:51:23.995402   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:23.995410   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:23.995476   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:24.029875   57945 cri.go:89] found id: ""
	I0610 11:51:24.029906   57945 logs.go:276] 0 containers: []
	W0610 11:51:24.029916   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:24.029924   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:24.030004   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:24.060884   57945 cri.go:89] found id: ""
	I0610 11:51:24.060915   57945 logs.go:276] 0 containers: []
	W0610 11:51:24.060925   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:24.060936   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:24.060968   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:24.111545   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:24.111578   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:24.125704   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:24.125734   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:24.202687   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:24.202711   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:24.202728   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:24.285599   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:24.285633   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:26.826052   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:26.840424   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:26.840498   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:26.873929   57945 cri.go:89] found id: ""
	I0610 11:51:26.873952   57945 logs.go:276] 0 containers: []
	W0610 11:51:26.873959   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:26.873965   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:26.874027   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:26.906033   57945 cri.go:89] found id: ""
	I0610 11:51:26.906057   57945 logs.go:276] 0 containers: []
	W0610 11:51:26.906065   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:26.906070   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:26.906119   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:26.939413   57945 cri.go:89] found id: ""
	I0610 11:51:26.939443   57945 logs.go:276] 0 containers: []
	W0610 11:51:26.939453   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:26.939461   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:26.939521   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:26.983325   57945 cri.go:89] found id: ""
	I0610 11:51:26.983350   57945 logs.go:276] 0 containers: []
	W0610 11:51:26.983357   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:26.983363   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:26.983420   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:27.015766   57945 cri.go:89] found id: ""
	I0610 11:51:27.015798   57945 logs.go:276] 0 containers: []
	W0610 11:51:27.015806   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:27.015812   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:27.015871   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:27.059150   57945 cri.go:89] found id: ""
	I0610 11:51:27.059181   57945 logs.go:276] 0 containers: []
	W0610 11:51:27.059190   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:27.059195   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:27.059246   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:27.093417   57945 cri.go:89] found id: ""
	I0610 11:51:27.093451   57945 logs.go:276] 0 containers: []
	W0610 11:51:27.093462   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:27.093468   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:27.093553   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:27.131755   57945 cri.go:89] found id: ""
	I0610 11:51:27.131784   57945 logs.go:276] 0 containers: []
	W0610 11:51:27.131792   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:27.131800   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:27.131811   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:27.207737   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:27.207774   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:27.245218   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:27.245246   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:27.294706   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:27.294741   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:27.307430   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:27.307454   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:27.372741   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:29.873837   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:29.887341   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:29.887416   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:29.923043   57945 cri.go:89] found id: ""
	I0610 11:51:29.923068   57945 logs.go:276] 0 containers: []
	W0610 11:51:29.923076   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:29.923082   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:29.923129   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:29.961621   57945 cri.go:89] found id: ""
	I0610 11:51:29.961644   57945 logs.go:276] 0 containers: []
	W0610 11:51:29.961651   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:29.961657   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:29.961713   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:29.997903   57945 cri.go:89] found id: ""
	I0610 11:51:29.997927   57945 logs.go:276] 0 containers: []
	W0610 11:51:29.997935   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:29.997941   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:29.997994   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:30.034915   57945 cri.go:89] found id: ""
	I0610 11:51:30.034944   57945 logs.go:276] 0 containers: []
	W0610 11:51:30.034952   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:30.034958   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:30.035015   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:30.067944   57945 cri.go:89] found id: ""
	I0610 11:51:30.067975   57945 logs.go:276] 0 containers: []
	W0610 11:51:30.067987   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:30.067994   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:30.068058   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:30.099606   57945 cri.go:89] found id: ""
	I0610 11:51:30.099638   57945 logs.go:276] 0 containers: []
	W0610 11:51:30.099649   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:30.099656   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:30.099718   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:30.132499   57945 cri.go:89] found id: ""
	I0610 11:51:30.132525   57945 logs.go:276] 0 containers: []
	W0610 11:51:30.132533   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:30.132538   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:30.132601   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:30.163240   57945 cri.go:89] found id: ""
	I0610 11:51:30.163264   57945 logs.go:276] 0 containers: []
	W0610 11:51:30.163272   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:30.163280   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:30.163291   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:30.214005   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:30.214039   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:30.227121   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:30.227147   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:30.295044   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:30.295064   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:30.295079   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:30.372213   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:30.372249   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:32.914045   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:32.927650   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:32.927728   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:32.961255   57945 cri.go:89] found id: ""
	I0610 11:51:32.961286   57945 logs.go:276] 0 containers: []
	W0610 11:51:32.961296   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:32.961302   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:32.961363   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:32.992103   57945 cri.go:89] found id: ""
	I0610 11:51:32.992133   57945 logs.go:276] 0 containers: []
	W0610 11:51:32.992144   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:32.992151   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:32.992212   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:33.026059   57945 cri.go:89] found id: ""
	I0610 11:51:33.026087   57945 logs.go:276] 0 containers: []
	W0610 11:51:33.026094   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:33.026100   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:33.026150   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:33.057350   57945 cri.go:89] found id: ""
	I0610 11:51:33.057383   57945 logs.go:276] 0 containers: []
	W0610 11:51:33.057393   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:33.057398   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:33.057446   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:33.090069   57945 cri.go:89] found id: ""
	I0610 11:51:33.090095   57945 logs.go:276] 0 containers: []
	W0610 11:51:33.090103   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:33.090109   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:33.090163   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:33.123151   57945 cri.go:89] found id: ""
	I0610 11:51:33.123174   57945 logs.go:276] 0 containers: []
	W0610 11:51:33.123185   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:33.123193   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:33.123250   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:33.156453   57945 cri.go:89] found id: ""
	I0610 11:51:33.156482   57945 logs.go:276] 0 containers: []
	W0610 11:51:33.156490   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:33.156495   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:33.156543   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:33.188832   57945 cri.go:89] found id: ""
	I0610 11:51:33.188864   57945 logs.go:276] 0 containers: []
	W0610 11:51:33.188874   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:33.188885   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:33.188900   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:33.241206   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:33.241242   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:33.254841   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:33.254865   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:33.322972   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:33.322994   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:33.323006   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:33.408709   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:33.408748   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:35.955441   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:35.969427   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:35.969487   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:36.007105   57945 cri.go:89] found id: ""
	I0610 11:51:36.007128   57945 logs.go:276] 0 containers: []
	W0610 11:51:36.007138   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:36.007146   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:36.007201   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:36.043689   57945 cri.go:89] found id: ""
	I0610 11:51:36.043715   57945 logs.go:276] 0 containers: []
	W0610 11:51:36.043726   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:36.043733   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:36.043787   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:36.074126   57945 cri.go:89] found id: ""
	I0610 11:51:36.074155   57945 logs.go:276] 0 containers: []
	W0610 11:51:36.074164   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:36.074169   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:36.074218   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:36.104592   57945 cri.go:89] found id: ""
	I0610 11:51:36.104621   57945 logs.go:276] 0 containers: []
	W0610 11:51:36.104630   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:36.104635   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:36.104697   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:36.135029   57945 cri.go:89] found id: ""
	I0610 11:51:36.135055   57945 logs.go:276] 0 containers: []
	W0610 11:51:36.135065   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:36.135073   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:36.135136   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:36.175304   57945 cri.go:89] found id: ""
	I0610 11:51:36.175326   57945 logs.go:276] 0 containers: []
	W0610 11:51:36.175335   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:36.175343   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:36.175405   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:36.211706   57945 cri.go:89] found id: ""
	I0610 11:51:36.211731   57945 logs.go:276] 0 containers: []
	W0610 11:51:36.211741   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:36.211749   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:36.211806   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:36.248132   57945 cri.go:89] found id: ""
	I0610 11:51:36.248164   57945 logs.go:276] 0 containers: []
	W0610 11:51:36.248174   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:36.248183   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:36.248194   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:36.330941   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:36.330998   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:36.368259   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:36.368288   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:36.420771   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:36.420811   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:36.433636   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:36.433664   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:36.507895   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:39.008164   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:39.022000   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:39.022079   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:39.061316   57945 cri.go:89] found id: ""
	I0610 11:51:39.061343   57945 logs.go:276] 0 containers: []
	W0610 11:51:39.061351   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:39.061356   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:39.061411   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:39.108632   57945 cri.go:89] found id: ""
	I0610 11:51:39.108660   57945 logs.go:276] 0 containers: []
	W0610 11:51:39.108668   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:39.108674   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:39.108731   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:39.150603   57945 cri.go:89] found id: ""
	I0610 11:51:39.150638   57945 logs.go:276] 0 containers: []
	W0610 11:51:39.150649   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:39.150658   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:39.150722   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:39.196360   57945 cri.go:89] found id: ""
	I0610 11:51:39.196385   57945 logs.go:276] 0 containers: []
	W0610 11:51:39.196392   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:39.196397   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:39.196460   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:39.244599   57945 cri.go:89] found id: ""
	I0610 11:51:39.244627   57945 logs.go:276] 0 containers: []
	W0610 11:51:39.244637   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:39.244645   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:39.244706   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:39.279977   57945 cri.go:89] found id: ""
	I0610 11:51:39.280004   57945 logs.go:276] 0 containers: []
	W0610 11:51:39.280013   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:39.280030   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:39.280096   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:39.312221   57945 cri.go:89] found id: ""
	I0610 11:51:39.312259   57945 logs.go:276] 0 containers: []
	W0610 11:51:39.312272   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:39.312280   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:39.312386   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:39.345588   57945 cri.go:89] found id: ""
	I0610 11:51:39.345616   57945 logs.go:276] 0 containers: []
	W0610 11:51:39.345624   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:39.345632   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:39.345645   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:39.422551   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:39.422587   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:39.459554   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:39.459583   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:39.508429   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:39.508462   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:39.521454   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:39.521484   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:39.585790   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:42.086606   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:42.100245   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:42.100320   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:42.135685   57945 cri.go:89] found id: ""
	I0610 11:51:42.135713   57945 logs.go:276] 0 containers: []
	W0610 11:51:42.135723   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:42.135731   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:42.135793   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:42.174975   57945 cri.go:89] found id: ""
	I0610 11:51:42.175006   57945 logs.go:276] 0 containers: []
	W0610 11:51:42.175016   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:42.175023   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:42.175086   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:42.209086   57945 cri.go:89] found id: ""
	I0610 11:51:42.209117   57945 logs.go:276] 0 containers: []
	W0610 11:51:42.209127   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:42.209135   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:42.209196   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:42.240706   57945 cri.go:89] found id: ""
	I0610 11:51:42.240737   57945 logs.go:276] 0 containers: []
	W0610 11:51:42.240748   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:42.240756   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:42.240810   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:42.271896   57945 cri.go:89] found id: ""
	I0610 11:51:42.271925   57945 logs.go:276] 0 containers: []
	W0610 11:51:42.271937   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:42.271944   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:42.272001   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:42.305651   57945 cri.go:89] found id: ""
	I0610 11:51:42.305683   57945 logs.go:276] 0 containers: []
	W0610 11:51:42.305691   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:42.305696   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:42.305743   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:42.337873   57945 cri.go:89] found id: ""
	I0610 11:51:42.337901   57945 logs.go:276] 0 containers: []
	W0610 11:51:42.337909   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:42.337915   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:42.337976   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:42.368569   57945 cri.go:89] found id: ""
	I0610 11:51:42.368598   57945 logs.go:276] 0 containers: []
	W0610 11:51:42.368609   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:42.368617   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:42.368628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:42.420039   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:42.420074   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:42.432920   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:42.432975   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:42.506064   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:42.506084   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:42.506096   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:42.585163   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:42.585197   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:45.129527   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:45.142190   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:45.142250   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:45.174078   57945 cri.go:89] found id: ""
	I0610 11:51:45.174107   57945 logs.go:276] 0 containers: []
	W0610 11:51:45.174118   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:45.174126   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:45.174186   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:45.211098   57945 cri.go:89] found id: ""
	I0610 11:51:45.211127   57945 logs.go:276] 0 containers: []
	W0610 11:51:45.211135   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:45.211141   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:45.211191   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:45.248748   57945 cri.go:89] found id: ""
	I0610 11:51:45.248782   57945 logs.go:276] 0 containers: []
	W0610 11:51:45.248793   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:45.248799   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:45.248861   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:45.288904   57945 cri.go:89] found id: ""
	I0610 11:51:45.288929   57945 logs.go:276] 0 containers: []
	W0610 11:51:45.288964   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:45.288974   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:45.289034   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:45.321170   57945 cri.go:89] found id: ""
	I0610 11:51:45.321202   57945 logs.go:276] 0 containers: []
	W0610 11:51:45.321212   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:45.321218   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:45.321282   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:45.353836   57945 cri.go:89] found id: ""
	I0610 11:51:45.353858   57945 logs.go:276] 0 containers: []
	W0610 11:51:45.353866   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:45.353871   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:45.353948   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:45.387550   57945 cri.go:89] found id: ""
	I0610 11:51:45.387590   57945 logs.go:276] 0 containers: []
	W0610 11:51:45.387606   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:45.387613   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:45.387663   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:45.417624   57945 cri.go:89] found id: ""
	I0610 11:51:45.417656   57945 logs.go:276] 0 containers: []
	W0610 11:51:45.417667   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:45.417679   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:45.417694   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:45.499038   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:45.499075   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:45.535641   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:45.535673   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:45.587022   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:45.587081   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:45.600489   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:45.600518   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:45.670326   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:48.171167   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:48.184478   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:48.184540   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:48.229617   57945 cri.go:89] found id: ""
	I0610 11:51:48.229643   57945 logs.go:276] 0 containers: []
	W0610 11:51:48.229653   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:48.229661   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:48.229733   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:48.263106   57945 cri.go:89] found id: ""
	I0610 11:51:48.263134   57945 logs.go:276] 0 containers: []
	W0610 11:51:48.263143   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:48.263150   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:48.263223   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:48.294056   57945 cri.go:89] found id: ""
	I0610 11:51:48.294088   57945 logs.go:276] 0 containers: []
	W0610 11:51:48.294100   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:48.294108   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:48.294171   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:48.325596   57945 cri.go:89] found id: ""
	I0610 11:51:48.325629   57945 logs.go:276] 0 containers: []
	W0610 11:51:48.325639   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:48.325646   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:48.325706   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:48.356301   57945 cri.go:89] found id: ""
	I0610 11:51:48.356338   57945 logs.go:276] 0 containers: []
	W0610 11:51:48.356349   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:48.356357   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:48.356418   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:48.388153   57945 cri.go:89] found id: ""
	I0610 11:51:48.388188   57945 logs.go:276] 0 containers: []
	W0610 11:51:48.388200   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:48.388208   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:48.388269   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:48.419287   57945 cri.go:89] found id: ""
	I0610 11:51:48.419318   57945 logs.go:276] 0 containers: []
	W0610 11:51:48.419328   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:48.419337   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:48.419400   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:48.453974   57945 cri.go:89] found id: ""
	I0610 11:51:48.454004   57945 logs.go:276] 0 containers: []
	W0610 11:51:48.454013   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:48.454022   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:48.454034   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:48.496671   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:48.496701   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:48.550207   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:48.550244   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:48.564232   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:48.564259   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:48.633763   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:48.633788   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:48.633801   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:51.220623   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:51.235123   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:51.235185   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:51.272971   57945 cri.go:89] found id: ""
	I0610 11:51:51.273000   57945 logs.go:276] 0 containers: []
	W0610 11:51:51.273011   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:51.273020   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:51.273082   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:51.306542   57945 cri.go:89] found id: ""
	I0610 11:51:51.306569   57945 logs.go:276] 0 containers: []
	W0610 11:51:51.306576   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:51.306581   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:51.306630   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:51.339514   57945 cri.go:89] found id: ""
	I0610 11:51:51.339542   57945 logs.go:276] 0 containers: []
	W0610 11:51:51.339551   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:51.339557   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:51.339601   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:51.374090   57945 cri.go:89] found id: ""
	I0610 11:51:51.374116   57945 logs.go:276] 0 containers: []
	W0610 11:51:51.374126   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:51.374134   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:51.374195   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:51.406269   57945 cri.go:89] found id: ""
	I0610 11:51:51.406299   57945 logs.go:276] 0 containers: []
	W0610 11:51:51.406310   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:51.406318   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:51.406384   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:51.438125   57945 cri.go:89] found id: ""
	I0610 11:51:51.438154   57945 logs.go:276] 0 containers: []
	W0610 11:51:51.438172   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:51.438181   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:51.438240   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:51.470672   57945 cri.go:89] found id: ""
	I0610 11:51:51.470703   57945 logs.go:276] 0 containers: []
	W0610 11:51:51.470714   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:51.470721   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:51.470768   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:51.503987   57945 cri.go:89] found id: ""
	I0610 11:51:51.504025   57945 logs.go:276] 0 containers: []
	W0610 11:51:51.504035   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:51.504046   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:51.504063   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:51.553162   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:51.553198   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:51.565962   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:51.565996   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:51.631686   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:51.631715   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:51.631736   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:51.707834   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:51.707867   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:54.248038   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:54.261302   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:54.261375   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:54.293194   57945 cri.go:89] found id: ""
	I0610 11:51:54.293228   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.293240   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:54.293247   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:54.293307   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:54.326656   57945 cri.go:89] found id: ""
	I0610 11:51:54.326687   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.326699   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:54.326707   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:54.326764   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:54.359330   57945 cri.go:89] found id: ""
	I0610 11:51:54.359365   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.359378   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:54.359386   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:54.359450   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:54.391520   57945 cri.go:89] found id: ""
	I0610 11:51:54.391549   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.391558   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:54.391565   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:54.391642   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:54.426803   57945 cri.go:89] found id: ""
	I0610 11:51:54.426840   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.426850   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:54.426860   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:54.426936   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:54.462618   57945 cri.go:89] found id: ""
	I0610 11:51:54.462645   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.462653   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:54.462659   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:54.462728   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:54.494599   57945 cri.go:89] found id: ""
	I0610 11:51:54.494631   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.494642   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:54.494650   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:54.494701   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:54.528236   57945 cri.go:89] found id: ""
	I0610 11:51:54.528265   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.528280   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:54.528290   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:54.528305   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:54.579562   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:54.579604   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:54.592871   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:54.592899   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:54.661928   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:54.661950   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:54.661984   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:54.741578   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:54.741611   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:57.283397   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:57.296631   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:57.296704   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:57.328185   57945 cri.go:89] found id: ""
	I0610 11:51:57.328217   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.328228   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:57.328237   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:57.328302   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:57.360137   57945 cri.go:89] found id: ""
	I0610 11:51:57.360163   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.360173   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:57.360188   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:57.360244   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:57.395638   57945 cri.go:89] found id: ""
	I0610 11:51:57.395680   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.395691   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:57.395700   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:57.395765   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:57.429024   57945 cri.go:89] found id: ""
	I0610 11:51:57.429051   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.429062   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:57.429070   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:57.429132   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:57.461726   57945 cri.go:89] found id: ""
	I0610 11:51:57.461757   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.461767   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:57.461773   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:57.461838   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:57.495055   57945 cri.go:89] found id: ""
	I0610 11:51:57.495078   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.495086   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:57.495092   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:57.495138   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:57.526495   57945 cri.go:89] found id: ""
	I0610 11:51:57.526521   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.526530   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:57.526536   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:57.526598   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:57.559160   57945 cri.go:89] found id: ""
	I0610 11:51:57.559181   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.559189   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:57.559197   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:57.559212   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:57.593801   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:57.593827   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:57.641074   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:57.641106   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:57.654097   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:57.654124   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:57.726137   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:57.726160   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:57.726176   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:00.302303   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:00.314500   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:00.314560   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:00.345865   57945 cri.go:89] found id: ""
	I0610 11:52:00.345889   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.345897   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:00.345902   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:00.345946   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:00.377383   57945 cri.go:89] found id: ""
	I0610 11:52:00.377405   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.377412   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:00.377417   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:00.377482   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:00.408667   57945 cri.go:89] found id: ""
	I0610 11:52:00.408694   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.408701   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:00.408706   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:00.408755   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:00.444349   57945 cri.go:89] found id: ""
	I0610 11:52:00.444379   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.444390   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:00.444397   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:00.444455   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:00.477886   57945 cri.go:89] found id: ""
	I0610 11:52:00.477910   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.477918   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:00.477924   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:00.477982   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:00.508996   57945 cri.go:89] found id: ""
	I0610 11:52:00.509023   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.509030   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:00.509036   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:00.509097   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:00.541548   57945 cri.go:89] found id: ""
	I0610 11:52:00.541572   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.541580   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:00.541585   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:00.541642   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:00.574507   57945 cri.go:89] found id: ""
	I0610 11:52:00.574534   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.574541   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:00.574550   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:00.574565   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:00.610838   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:00.610862   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:00.661155   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:00.661197   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:00.674122   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:00.674154   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:00.745943   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:00.745976   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:00.745993   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:03.325365   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:03.337955   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:03.338042   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:03.370767   57945 cri.go:89] found id: ""
	I0610 11:52:03.370798   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.370810   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:03.370818   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:03.370903   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:03.402587   57945 cri.go:89] found id: ""
	I0610 11:52:03.402616   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.402623   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:03.402628   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:03.402684   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:03.436751   57945 cri.go:89] found id: ""
	I0610 11:52:03.436778   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.436788   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:03.436795   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:03.436854   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:03.467745   57945 cri.go:89] found id: ""
	I0610 11:52:03.467778   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.467788   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:03.467798   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:03.467865   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:03.499321   57945 cri.go:89] found id: ""
	I0610 11:52:03.499347   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.499355   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:03.499361   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:03.499419   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:03.534209   57945 cri.go:89] found id: ""
	I0610 11:52:03.534242   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.534253   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:03.534261   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:03.534318   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:03.567837   57945 cri.go:89] found id: ""
	I0610 11:52:03.567871   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.567882   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:03.567889   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:03.567954   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:03.604223   57945 cri.go:89] found id: ""
	I0610 11:52:03.604249   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.604258   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:03.604266   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:03.604280   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:03.659716   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:03.659751   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:03.673389   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:03.673425   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:03.746076   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:03.746104   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:03.746118   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:03.825803   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:03.825837   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:06.362151   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:06.375320   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:06.375394   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:06.409805   57945 cri.go:89] found id: ""
	I0610 11:52:06.409840   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.409851   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:06.409859   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:06.409914   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:06.447126   57945 cri.go:89] found id: ""
	I0610 11:52:06.447157   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.447167   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:06.447174   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:06.447237   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:06.479443   57945 cri.go:89] found id: ""
	I0610 11:52:06.479472   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.479483   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:06.479489   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:06.479546   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:06.511107   57945 cri.go:89] found id: ""
	I0610 11:52:06.511137   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.511148   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:06.511163   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:06.511223   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:06.542727   57945 cri.go:89] found id: ""
	I0610 11:52:06.542753   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.542761   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:06.542767   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:06.542812   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:06.582141   57945 cri.go:89] found id: ""
	I0610 11:52:06.582166   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.582174   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:06.582180   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:06.582239   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:06.615203   57945 cri.go:89] found id: ""
	I0610 11:52:06.615230   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.615240   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:06.615248   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:06.615314   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:06.650286   57945 cri.go:89] found id: ""
	I0610 11:52:06.650310   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.650317   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:06.650326   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:06.650338   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:06.721601   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:06.721631   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:06.721646   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:06.794645   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:06.794679   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:06.830598   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:06.830628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:06.880740   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:06.880786   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:09.394202   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:09.409822   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:09.409898   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:09.451573   57945 cri.go:89] found id: ""
	I0610 11:52:09.451597   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.451605   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:09.451611   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:09.451663   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:09.491039   57945 cri.go:89] found id: ""
	I0610 11:52:09.491069   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.491080   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:09.491087   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:09.491147   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:09.522023   57945 cri.go:89] found id: ""
	I0610 11:52:09.522050   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.522058   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:09.522063   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:09.522108   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:09.554014   57945 cri.go:89] found id: ""
	I0610 11:52:09.554040   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.554048   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:09.554057   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:09.554127   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:09.586285   57945 cri.go:89] found id: ""
	I0610 11:52:09.586318   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.586328   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:09.586336   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:09.586396   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:09.618362   57945 cri.go:89] found id: ""
	I0610 11:52:09.618391   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.618401   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:09.618408   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:09.618465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:09.651067   57945 cri.go:89] found id: ""
	I0610 11:52:09.651097   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.651108   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:09.651116   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:09.651174   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:09.682764   57945 cri.go:89] found id: ""
	I0610 11:52:09.682792   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.682799   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:09.682807   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:09.682819   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:09.755071   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:09.755096   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:09.755109   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:09.833635   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:09.833672   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:09.869744   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:09.869777   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:09.924045   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:09.924079   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:12.438029   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:12.452003   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:12.452070   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:12.485680   57945 cri.go:89] found id: ""
	I0610 11:52:12.485711   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.485719   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:12.485725   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:12.485773   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:12.519200   57945 cri.go:89] found id: ""
	I0610 11:52:12.519227   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.519238   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:12.519245   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:12.519317   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:12.553154   57945 cri.go:89] found id: ""
	I0610 11:52:12.553179   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.553185   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:12.553191   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:12.553237   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:12.584499   57945 cri.go:89] found id: ""
	I0610 11:52:12.584543   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.584555   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:12.584564   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:12.584619   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:12.619051   57945 cri.go:89] found id: ""
	I0610 11:52:12.619079   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.619094   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:12.619102   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:12.619165   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:12.653652   57945 cri.go:89] found id: ""
	I0610 11:52:12.653690   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.653702   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:12.653710   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:12.653773   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:12.685887   57945 cri.go:89] found id: ""
	I0610 11:52:12.685919   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.685930   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:12.685938   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:12.685997   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:12.719534   57945 cri.go:89] found id: ""
	I0610 11:52:12.719567   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.719578   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:12.719591   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:12.719603   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:12.770689   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:12.770725   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:12.783574   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:12.783604   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:12.855492   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:12.855518   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:12.855529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:12.928993   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:12.929037   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:15.487670   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:15.501367   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:15.501437   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:15.534205   57945 cri.go:89] found id: ""
	I0610 11:52:15.534248   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.534256   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:15.534262   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:15.534315   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:15.570972   57945 cri.go:89] found id: ""
	I0610 11:52:15.571001   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.571008   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:15.571013   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:15.571073   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:15.604233   57945 cri.go:89] found id: ""
	I0610 11:52:15.604258   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.604267   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:15.604273   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:15.604328   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:15.637119   57945 cri.go:89] found id: ""
	I0610 11:52:15.637150   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.637159   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:15.637167   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:15.637226   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:15.670548   57945 cri.go:89] found id: ""
	I0610 11:52:15.670572   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.670580   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:15.670586   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:15.670644   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:15.706374   57945 cri.go:89] found id: ""
	I0610 11:52:15.706398   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.706406   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:15.706412   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:15.706457   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:15.742828   57945 cri.go:89] found id: ""
	I0610 11:52:15.742852   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.742859   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:15.742865   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:15.742910   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:15.773783   57945 cri.go:89] found id: ""
	I0610 11:52:15.773811   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.773818   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:15.773825   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:15.773835   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:15.828725   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:15.828764   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:15.842653   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:15.842682   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:15.919771   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:15.919794   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:15.919809   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:15.994439   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:15.994478   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:18.532040   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:18.544800   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:18.544893   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:18.579148   57945 cri.go:89] found id: ""
	I0610 11:52:18.579172   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.579180   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:18.579186   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:18.579236   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:18.613005   57945 cri.go:89] found id: ""
	I0610 11:52:18.613028   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.613035   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:18.613042   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:18.613094   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:18.648843   57945 cri.go:89] found id: ""
	I0610 11:52:18.648870   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.648878   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:18.648883   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:18.648939   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:18.678943   57945 cri.go:89] found id: ""
	I0610 11:52:18.678974   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.679014   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:18.679022   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:18.679082   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:18.728485   57945 cri.go:89] found id: ""
	I0610 11:52:18.728516   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.728527   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:18.728535   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:18.728605   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:18.764320   57945 cri.go:89] found id: ""
	I0610 11:52:18.764352   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.764363   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:18.764370   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:18.764431   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:18.797326   57945 cri.go:89] found id: ""
	I0610 11:52:18.797358   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.797369   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:18.797377   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:18.797440   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:18.832517   57945 cri.go:89] found id: ""
	I0610 11:52:18.832552   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.832563   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:18.832574   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:18.832588   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:18.845158   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:18.845192   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:18.915928   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:18.915959   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:18.915974   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:18.990583   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:18.990625   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:19.029044   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:19.029069   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:21.582973   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:21.596373   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:21.596453   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:21.633497   57945 cri.go:89] found id: ""
	I0610 11:52:21.633528   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.633538   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:21.633546   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:21.633631   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:21.663999   57945 cri.go:89] found id: ""
	I0610 11:52:21.664055   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.664069   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:21.664078   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:21.664138   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:21.698105   57945 cri.go:89] found id: ""
	I0610 11:52:21.698136   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.698147   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:21.698155   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:21.698213   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:21.730036   57945 cri.go:89] found id: ""
	I0610 11:52:21.730061   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.730068   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:21.730074   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:21.730119   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:21.764484   57945 cri.go:89] found id: ""
	I0610 11:52:21.764507   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.764515   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:21.764520   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:21.764575   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:21.797366   57945 cri.go:89] found id: ""
	I0610 11:52:21.797397   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.797408   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:21.797415   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:21.797478   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:21.832991   57945 cri.go:89] found id: ""
	I0610 11:52:21.833023   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.833030   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:21.833035   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:21.833081   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:21.868859   57945 cri.go:89] found id: ""
	I0610 11:52:21.868890   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.868899   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:21.868924   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:21.868937   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:21.918976   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:21.919013   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:21.934602   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:21.934629   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:22.002888   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:22.002909   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:22.002920   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:22.082894   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:22.082941   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:24.620683   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:24.634200   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:24.634280   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:24.667181   57945 cri.go:89] found id: ""
	I0610 11:52:24.667209   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.667217   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:24.667222   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:24.667277   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:24.702114   57945 cri.go:89] found id: ""
	I0610 11:52:24.702142   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.702151   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:24.702158   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:24.702220   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:24.734464   57945 cri.go:89] found id: ""
	I0610 11:52:24.734488   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.734497   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:24.734502   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:24.734565   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:24.767074   57945 cri.go:89] found id: ""
	I0610 11:52:24.767124   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.767132   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:24.767138   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:24.767210   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:24.800328   57945 cri.go:89] found id: ""
	I0610 11:52:24.800358   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.800369   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:24.800376   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:24.800442   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:24.837785   57945 cri.go:89] found id: ""
	I0610 11:52:24.837814   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.837822   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:24.837828   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:24.837878   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:24.874886   57945 cri.go:89] found id: ""
	I0610 11:52:24.874910   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.874917   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:24.874923   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:24.874968   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:24.912191   57945 cri.go:89] found id: ""
	I0610 11:52:24.912217   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.912235   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:24.912247   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:24.912265   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:24.968229   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:24.968262   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:24.981018   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:24.981048   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:25.049879   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:25.049907   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:25.049922   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:25.135103   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:25.135156   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:27.687667   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:27.700418   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:27.700486   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:27.733712   57945 cri.go:89] found id: ""
	I0610 11:52:27.733740   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.733749   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:27.733754   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:27.733839   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:27.774063   57945 cri.go:89] found id: ""
	I0610 11:52:27.774089   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.774100   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:27.774108   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:27.774169   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:27.813906   57945 cri.go:89] found id: ""
	I0610 11:52:27.813945   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.813956   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:27.813963   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:27.814031   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:27.845877   57945 cri.go:89] found id: ""
	I0610 11:52:27.845901   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.845909   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:27.845915   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:27.845961   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:27.880094   57945 cri.go:89] found id: ""
	I0610 11:52:27.880139   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.880148   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:27.880153   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:27.880206   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:27.914308   57945 cri.go:89] found id: ""
	I0610 11:52:27.914332   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.914342   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:27.914355   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:27.914420   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:27.949386   57945 cri.go:89] found id: ""
	I0610 11:52:27.949412   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.949423   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:27.949430   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:27.949490   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:27.983901   57945 cri.go:89] found id: ""
	I0610 11:52:27.983927   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.983938   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:27.983948   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:27.983963   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:28.032820   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:28.032853   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:28.046306   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:28.046332   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:28.120614   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:28.120642   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:28.120657   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:28.202182   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:28.202217   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:30.741274   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:30.754276   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:30.754358   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:30.789142   57945 cri.go:89] found id: ""
	I0610 11:52:30.789174   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.789185   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:30.789193   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:30.789255   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:30.822319   57945 cri.go:89] found id: ""
	I0610 11:52:30.822350   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.822362   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:30.822369   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:30.822428   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:30.853166   57945 cri.go:89] found id: ""
	I0610 11:52:30.853192   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.853199   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:30.853204   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:30.853271   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:30.892290   57945 cri.go:89] found id: ""
	I0610 11:52:30.892320   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.892331   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:30.892339   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:30.892401   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:30.938603   57945 cri.go:89] found id: ""
	I0610 11:52:30.938629   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.938639   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:30.938646   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:30.938703   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:30.994532   57945 cri.go:89] found id: ""
	I0610 11:52:30.994567   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.994583   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:30.994589   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:30.994649   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:31.041818   57945 cri.go:89] found id: ""
	I0610 11:52:31.041847   57945 logs.go:276] 0 containers: []
	W0610 11:52:31.041859   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:31.041867   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:31.041923   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:31.079897   57945 cri.go:89] found id: ""
	I0610 11:52:31.079927   57945 logs.go:276] 0 containers: []
	W0610 11:52:31.079938   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:31.079951   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:31.079967   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:31.092291   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:31.092321   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:31.163921   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:31.163943   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:31.163955   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:31.242247   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:31.242287   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:31.281257   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:31.281286   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:33.837783   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:33.851085   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:33.851164   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:33.885285   57945 cri.go:89] found id: ""
	I0610 11:52:33.885314   57945 logs.go:276] 0 containers: []
	W0610 11:52:33.885324   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:33.885332   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:33.885391   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:33.924958   57945 cri.go:89] found id: ""
	I0610 11:52:33.924996   57945 logs.go:276] 0 containers: []
	W0610 11:52:33.925006   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:33.925022   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:33.925083   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:33.958563   57945 cri.go:89] found id: ""
	I0610 11:52:33.958589   57945 logs.go:276] 0 containers: []
	W0610 11:52:33.958598   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:33.958606   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:33.958665   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:33.991575   57945 cri.go:89] found id: ""
	I0610 11:52:33.991606   57945 logs.go:276] 0 containers: []
	W0610 11:52:33.991616   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:33.991624   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:33.991693   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:34.029700   57945 cri.go:89] found id: ""
	I0610 11:52:34.029729   57945 logs.go:276] 0 containers: []
	W0610 11:52:34.029740   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:34.029748   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:34.029805   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:34.068148   57945 cri.go:89] found id: ""
	I0610 11:52:34.068183   57945 logs.go:276] 0 containers: []
	W0610 11:52:34.068194   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:34.068201   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:34.068275   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:34.100735   57945 cri.go:89] found id: ""
	I0610 11:52:34.100760   57945 logs.go:276] 0 containers: []
	W0610 11:52:34.100767   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:34.100772   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:34.100817   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:34.132898   57945 cri.go:89] found id: ""
	I0610 11:52:34.132927   57945 logs.go:276] 0 containers: []
	W0610 11:52:34.132937   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:34.132958   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:34.132972   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:34.184690   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:34.184723   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:34.199604   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:34.199641   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:34.270744   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:34.270763   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:34.270775   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:34.352291   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:34.352334   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:36.894188   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:36.914098   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:36.914158   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:36.957378   57945 cri.go:89] found id: ""
	I0610 11:52:36.957408   57945 logs.go:276] 0 containers: []
	W0610 11:52:36.957419   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:36.957427   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:36.957498   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:37.003576   57945 cri.go:89] found id: ""
	I0610 11:52:37.003602   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.003611   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:37.003618   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:37.003677   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:37.040221   57945 cri.go:89] found id: ""
	I0610 11:52:37.040245   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.040253   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:37.040259   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:37.040307   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:37.078151   57945 cri.go:89] found id: ""
	I0610 11:52:37.078185   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.078195   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:37.078202   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:37.078261   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:37.117446   57945 cri.go:89] found id: ""
	I0610 11:52:37.117468   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.117476   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:37.117482   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:37.117548   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:37.155320   57945 cri.go:89] found id: ""
	I0610 11:52:37.155344   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.155356   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:37.155364   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:37.155414   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:37.192194   57945 cri.go:89] found id: ""
	I0610 11:52:37.192221   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.192230   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:37.192238   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:37.192303   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:37.225567   57945 cri.go:89] found id: ""
	I0610 11:52:37.225594   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.225605   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:37.225616   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:37.225632   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:37.240139   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:37.240164   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:37.307754   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:37.307784   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:37.307801   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:37.385929   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:37.385964   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:37.424991   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:37.425029   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:39.974839   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:39.988788   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:39.988858   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:40.025922   57945 cri.go:89] found id: ""
	I0610 11:52:40.025947   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.025954   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:40.025967   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:40.026026   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:40.062043   57945 cri.go:89] found id: ""
	I0610 11:52:40.062076   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.062085   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:40.062094   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:40.062158   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:40.095441   57945 cri.go:89] found id: ""
	I0610 11:52:40.095465   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.095472   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:40.095478   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:40.095529   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:40.127633   57945 cri.go:89] found id: ""
	I0610 11:52:40.127662   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.127672   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:40.127680   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:40.127740   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:40.161232   57945 cri.go:89] found id: ""
	I0610 11:52:40.161257   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.161267   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:40.161274   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:40.161334   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:40.194491   57945 cri.go:89] found id: ""
	I0610 11:52:40.194521   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.194529   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:40.194535   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:40.194583   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:40.226376   57945 cri.go:89] found id: ""
	I0610 11:52:40.226404   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.226411   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:40.226416   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:40.226465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:40.257938   57945 cri.go:89] found id: ""
	I0610 11:52:40.257968   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.257978   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:40.257988   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:40.258004   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:40.327247   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:40.327276   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:40.327291   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:40.404231   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:40.404263   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:40.441554   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:40.441585   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:40.491952   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:40.491987   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:43.006217   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:43.019113   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:43.019187   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:43.053010   57945 cri.go:89] found id: ""
	I0610 11:52:43.053035   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.053045   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:43.053051   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:43.053115   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:43.086118   57945 cri.go:89] found id: ""
	I0610 11:52:43.086145   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.086156   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:43.086171   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:43.086235   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:43.117892   57945 cri.go:89] found id: ""
	I0610 11:52:43.117919   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.117929   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:43.117937   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:43.118011   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:43.149751   57945 cri.go:89] found id: ""
	I0610 11:52:43.149777   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.149787   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:43.149795   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:43.149855   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:43.184215   57945 cri.go:89] found id: ""
	I0610 11:52:43.184250   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.184261   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:43.184268   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:43.184332   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:43.219758   57945 cri.go:89] found id: ""
	I0610 11:52:43.219787   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.219797   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:43.219805   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:43.219868   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:43.250698   57945 cri.go:89] found id: ""
	I0610 11:52:43.250728   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.250738   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:43.250746   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:43.250803   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:43.286526   57945 cri.go:89] found id: ""
	I0610 11:52:43.286556   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.286566   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:43.286576   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:43.286589   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:43.362219   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:43.362255   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:43.398332   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:43.398366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:43.449468   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:43.449502   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:43.462346   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:43.462381   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:43.539578   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
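Every "describe nodes" attempt fails identically because nothing is serving the Kubernetes API on the port the kubeconfig points at. A rough manual check (the 8443 port is taken from the error text above, and /healthz is a standard apiserver endpoint; adjust if the cluster is configured differently) would be:

    # confirm whether anything is listening on the apiserver port
    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    curl -ksS https://localhost:8443/healthz || true
    # retry the exact command the test runs
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig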
	I0610 11:52:46.039720   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:46.052749   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:46.052821   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:46.093110   57945 cri.go:89] found id: ""
	I0610 11:52:46.093139   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.093147   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:46.093152   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:46.093219   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:46.130885   57945 cri.go:89] found id: ""
	I0610 11:52:46.130916   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.130924   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:46.130930   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:46.130977   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:46.167471   57945 cri.go:89] found id: ""
	I0610 11:52:46.167507   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.167524   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:46.167531   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:46.167593   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:46.204776   57945 cri.go:89] found id: ""
	I0610 11:52:46.204799   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.204807   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:46.204812   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:46.204860   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:46.244826   57945 cri.go:89] found id: ""
	I0610 11:52:46.244859   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.244869   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:46.244876   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:46.244942   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:46.281757   57945 cri.go:89] found id: ""
	I0610 11:52:46.281783   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.281791   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:46.281797   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:46.281844   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:46.319517   57945 cri.go:89] found id: ""
	I0610 11:52:46.319546   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.319558   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:46.319566   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:46.319636   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:46.355806   57945 cri.go:89] found id: ""
	I0610 11:52:46.355835   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.355846   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:46.355858   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:46.355872   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:46.433087   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:46.433131   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:46.468792   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:46.468829   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:46.517931   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:46.517969   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:46.530892   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:46.530935   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:46.592585   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:49.093662   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:49.106539   57945 kubeadm.go:591] duration metric: took 4m4.396325615s to restartPrimaryControlPlane
	W0610 11:52:49.106625   57945 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0610 11:52:49.106658   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0610 11:52:53.503059   57945 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.396374472s)
	I0610 11:52:53.503148   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:52:53.518235   57945 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 11:52:53.529298   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:52:53.539273   57945 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:52:53.539297   57945 kubeadm.go:156] found existing configuration files:
	
	I0610 11:52:53.539341   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 11:52:53.548285   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:52:53.548354   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:52:53.557659   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 11:52:53.569253   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:52:53.569330   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:52:53.579689   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 11:52:53.589800   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:52:53.589865   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:52:53.600324   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 11:52:53.610542   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:52:53.610612   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 11:52:53.620144   57945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 11:52:53.687195   57945 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0610 11:52:53.687302   57945 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 11:52:53.851035   57945 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 11:52:53.851178   57945 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 11:52:53.851305   57945 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 11:52:54.037503   57945 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 11:52:54.039523   57945 out.go:204]   - Generating certificates and keys ...
	I0610 11:52:54.039621   57945 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 11:52:54.039718   57945 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 11:52:54.039850   57945 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 11:52:54.039959   57945 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 11:52:54.040055   57945 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 11:52:54.040135   57945 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 11:52:54.040233   57945 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 11:52:54.040506   57945 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 11:52:54.040892   57945 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 11:52:54.041344   57945 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 11:52:54.041411   57945 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 11:52:54.041507   57945 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 11:52:54.151486   57945 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 11:52:54.389555   57945 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 11:52:54.507653   57945 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 11:52:54.690886   57945 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 11:52:54.708542   57945 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 11:52:54.712251   57945 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 11:52:54.712504   57945 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 11:52:54.872755   57945 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 11:52:54.874801   57945 out.go:204]   - Booting up control plane ...
	I0610 11:52:54.874978   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 11:52:54.883224   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 11:52:54.885032   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 11:52:54.886182   57945 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 11:52:54.891030   57945 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 11:53:34.892890   57945 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0610 11:53:34.893019   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:53:34.893195   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:53:39.893441   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:53:39.893640   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:53:49.894176   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:53:49.894368   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:54:09.895012   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:54:09.895413   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:54:49.896623   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:54:49.896849   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:54:49.896868   57945 kubeadm.go:309] 
	I0610 11:54:49.896922   57945 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0610 11:54:49.897030   57945 kubeadm.go:309] 		timed out waiting for the condition
	I0610 11:54:49.897053   57945 kubeadm.go:309] 
	I0610 11:54:49.897121   57945 kubeadm.go:309] 	This error is likely caused by:
	I0610 11:54:49.897157   57945 kubeadm.go:309] 		- The kubelet is not running
	I0610 11:54:49.897308   57945 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0610 11:54:49.897322   57945 kubeadm.go:309] 
	I0610 11:54:49.897493   57945 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0610 11:54:49.897553   57945 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0610 11:54:49.897612   57945 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0610 11:54:49.897623   57945 kubeadm.go:309] 
	I0610 11:54:49.897755   57945 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0610 11:54:49.897866   57945 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0610 11:54:49.897876   57945 kubeadm.go:309] 
	I0610 11:54:49.898032   57945 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0610 11:54:49.898139   57945 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0610 11:54:49.898253   57945 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0610 11:54:49.898357   57945 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0610 11:54:49.898365   57945 kubeadm.go:309] 
	I0610 11:54:49.899094   57945 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 11:54:49.899208   57945 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0610 11:54:49.899302   57945 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0610 11:54:49.899441   57945 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0610 11:54:49.899502   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0610 11:54:50.366528   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:54:50.380107   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:54:50.390067   57945 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:54:50.390089   57945 kubeadm.go:156] found existing configuration files:
	
	I0610 11:54:50.390132   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 11:54:50.399159   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:54:50.399222   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:54:50.409346   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 11:54:50.420402   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:54:50.420458   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:54:50.432874   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 11:54:50.444351   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:54:50.444430   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:54:50.458175   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 11:54:50.468538   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:54:50.468611   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 11:54:50.480033   57945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 11:54:50.543600   57945 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0610 11:54:50.543653   57945 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 11:54:50.682810   57945 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 11:54:50.682970   57945 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 11:54:50.683117   57945 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 11:54:50.877761   57945 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 11:54:50.879686   57945 out.go:204]   - Generating certificates and keys ...
	I0610 11:54:50.879788   57945 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 11:54:50.879881   57945 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 11:54:50.880010   57945 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 11:54:50.880075   57945 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 11:54:50.880145   57945 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 11:54:50.880235   57945 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 11:54:50.880334   57945 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 11:54:50.880543   57945 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 11:54:50.880654   57945 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 11:54:50.880771   57945 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 11:54:50.880835   57945 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 11:54:50.880912   57945 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 11:54:51.326073   57945 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 11:54:51.537409   57945 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 11:54:51.721400   57945 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 11:54:51.884882   57945 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 11:54:51.904377   57945 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 11:54:51.906470   57945 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 11:54:51.906560   57945 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 11:54:52.065800   57945 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 11:54:52.067657   57945 out.go:204]   - Booting up control plane ...
	I0610 11:54:52.067807   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 11:54:52.069012   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 11:54:52.070508   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 11:54:52.071669   57945 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 11:54:52.074772   57945 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 11:55:32.077145   57945 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0610 11:55:32.077542   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:55:32.077740   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:55:37.078114   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:55:37.078357   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:55:47.078706   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:55:47.078906   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:56:07.079053   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:56:07.079285   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:56:47.078993   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:56:47.079439   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:56:47.079463   57945 kubeadm.go:309] 
	I0610 11:56:47.079513   57945 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0610 11:56:47.079597   57945 kubeadm.go:309] 		timed out waiting for the condition
	I0610 11:56:47.079629   57945 kubeadm.go:309] 
	I0610 11:56:47.079678   57945 kubeadm.go:309] 	This error is likely caused by:
	I0610 11:56:47.079718   57945 kubeadm.go:309] 		- The kubelet is not running
	I0610 11:56:47.079865   57945 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0610 11:56:47.079876   57945 kubeadm.go:309] 
	I0610 11:56:47.080014   57945 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0610 11:56:47.080077   57945 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0610 11:56:47.080132   57945 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0610 11:56:47.080151   57945 kubeadm.go:309] 
	I0610 11:56:47.080280   57945 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0610 11:56:47.080377   57945 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0610 11:56:47.080389   57945 kubeadm.go:309] 
	I0610 11:56:47.080543   57945 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0610 11:56:47.080663   57945 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0610 11:56:47.080769   57945 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0610 11:56:47.080862   57945 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0610 11:56:47.080874   57945 kubeadm.go:309] 
	I0610 11:56:47.081877   57945 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 11:56:47.082023   57945 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0610 11:56:47.082137   57945 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0610 11:56:47.082233   57945 kubeadm.go:393] duration metric: took 8m2.423366884s to StartCluster
	I0610 11:56:47.082273   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:56:47.082325   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:56:47.130548   57945 cri.go:89] found id: ""
	I0610 11:56:47.130585   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.130596   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:56:47.130603   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:56:47.130673   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:56:47.170087   57945 cri.go:89] found id: ""
	I0610 11:56:47.170124   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.170136   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:56:47.170144   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:56:47.170219   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:56:47.210394   57945 cri.go:89] found id: ""
	I0610 11:56:47.210430   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.210442   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:56:47.210450   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:56:47.210532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:56:47.246002   57945 cri.go:89] found id: ""
	I0610 11:56:47.246032   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.246043   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:56:47.246051   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:56:47.246119   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:56:47.282333   57945 cri.go:89] found id: ""
	I0610 11:56:47.282361   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.282369   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:56:47.282375   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:56:47.282432   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:56:47.316205   57945 cri.go:89] found id: ""
	I0610 11:56:47.316241   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.316254   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:56:47.316262   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:56:47.316323   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:56:47.356012   57945 cri.go:89] found id: ""
	I0610 11:56:47.356047   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.356060   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:56:47.356069   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:56:47.356140   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:56:47.404624   57945 cri.go:89] found id: ""
	I0610 11:56:47.404655   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.404666   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:56:47.404678   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:56:47.404694   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:56:47.475236   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:56:47.475282   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:56:47.493382   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:56:47.493418   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:56:47.589894   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:56:47.589918   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:56:47.589934   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:56:47.726080   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:56:47.726123   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0610 11:56:47.770399   57945 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0610 11:56:47.770451   57945 out.go:239] * 
	* 
	W0610 11:56:47.770532   57945 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0610 11:56:47.770558   57945 out.go:239] * 
	W0610 11:56:47.771459   57945 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 11:56:47.775172   57945 out.go:177] 
	W0610 11:56:47.776444   57945 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0610 11:56:47.776509   57945 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0610 11:56:47.776545   57945 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0610 11:56:47.778306   57945 out.go:177] 

                                                
                                                
** /stderr **
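The kubeadm output above fails in the wait-control-plane phase because every [kubelet-check] probe of 127.0.0.1:10248 is refused, so no static pod ever comes up. A minimal node-side check, assembled only from the commands the kubeadm message itself suggests (run inside the VM, e.g. via 'out/minikube-linux-amd64 ssh -p old-k8s-version-166693'; the profile name comes from the failing start command on the next line), could look like this sketch:

	# kubelet unit state and the most recent journal entries
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100

	# the health endpoint kubeadm polls during wait-control-plane
	curl -sSL http://localhost:10248/healthz

	# control-plane containers, if any, started by CRI-O
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause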
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-166693 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
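The stderr above also carries its own remediation hint: the Suggestion line recommends passing --extra-config=kubelet.cgroup-driver=systemd. A retry along those lines, reusing the exact flags from the failing args and adding only that one option (whether it actually unblocks the v1.20.0 kubelet on this ISO is not verified here), would be:

	out/minikube-linux-amd64 start -p old-k8s-version-166693 \
	  --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system \
	  --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd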
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-166693 -n old-k8s-version-166693
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-166693 -n old-k8s-version-166693: exit status 2 (243.673052ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
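The status probe above renders only the Host field, which is presumably why it prints Running yet exits non-zero: the VM is up while the Kubernetes components behind it are not. A slightly wider query (assuming the usual minikube status template fields Kubelet and APIServer, which this log does not show) would name the unhealthy component directly:

	out/minikube-linux-amd64 status -p old-k8s-version-166693 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'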
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-166693 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-324836                              | cert-expiration-324836       | jenkins | v1.33.1 | 10 Jun 24 11:39 UTC | 10 Jun 24 11:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-036579 | jenkins | v1.33.1 | 10 Jun 24 11:39 UTC | 10 Jun 24 11:39 UTC |
	|         | disable-driver-mounts-036579                           |                              |         |         |                     |                     |
	| start   | -p no-preload-298179                                   | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:39 UTC | 10 Jun 24 11:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-832735            | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:40 UTC | 10 Jun 24 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-832735                                  | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC | 10 Jun 24 11:41 UTC |
	| addons  | enable metrics-server -p no-preload-298179             | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC | 10 Jun 24 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-298179                                   | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC | 10 Jun 24 11:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC | 10 Jun 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-832735                 | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-832735                                  | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC | 10 Jun 24 11:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-166693        | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-298179                  | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC | 10 Jun 24 11:44 UTC |
	| start   | -p                                                     | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC | 10 Jun 24 11:49 UTC |
	|         | default-k8s-diff-port-281114                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p no-preload-298179                                   | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC | 10 Jun 24 11:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-166693                              | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:45 UTC | 10 Jun 24 11:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-166693             | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:45 UTC | 10 Jun 24 11:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-166693                              | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-281114  | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:49 UTC | 10 Jun 24 11:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:49 UTC |                     |
	|         | default-k8s-diff-port-281114                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-281114       | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:51 UTC |                     |
	|         | default-k8s-diff-port-281114                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 11:51:53
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 11:51:53.675460   60146 out.go:291] Setting OutFile to fd 1 ...
	I0610 11:51:53.675676   60146 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:51:53.675684   60146 out.go:304] Setting ErrFile to fd 2...
	I0610 11:51:53.675688   60146 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:51:53.675848   60146 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 11:51:53.676386   60146 out.go:298] Setting JSON to false
	I0610 11:51:53.677403   60146 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5655,"bootTime":1718014659,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 11:51:53.677465   60146 start.go:139] virtualization: kvm guest
	I0610 11:51:53.679851   60146 out.go:177] * [default-k8s-diff-port-281114] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 11:51:53.681209   60146 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 11:51:53.682492   60146 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 11:51:53.681162   60146 notify.go:220] Checking for updates...
	I0610 11:51:53.683939   60146 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 11:51:53.685202   60146 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 11:51:53.686363   60146 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 11:51:53.687770   60146 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 11:51:53.689668   60146 config.go:182] Loaded profile config "default-k8s-diff-port-281114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:51:53.690093   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:51:53.690167   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:51:53.705134   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35827
	I0610 11:51:53.705647   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:51:53.706289   60146 main.go:141] libmachine: Using API Version  1
	I0610 11:51:53.706314   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:51:53.706603   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:51:53.706788   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:51:53.707058   60146 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 11:51:53.707411   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:51:53.707451   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:51:53.722927   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45441
	I0610 11:51:53.723433   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:51:53.723927   60146 main.go:141] libmachine: Using API Version  1
	I0610 11:51:53.723953   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:51:53.724482   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:51:53.724651   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:51:53.763209   60146 out.go:177] * Using the kvm2 driver based on existing profile
	I0610 11:51:53.764436   60146 start.go:297] selected driver: kvm2
	I0610 11:51:53.764446   60146 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-281114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-281114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.222 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:51:53.764537   60146 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 11:51:53.765172   60146 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 11:51:53.765257   60146 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19046-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0610 11:51:53.782641   60146 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0610 11:51:53.783044   60146 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 11:51:53.783099   60146 cni.go:84] Creating CNI manager for ""
	I0610 11:51:53.783109   60146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:51:53.783152   60146 start.go:340] cluster config:
	{Name:default-k8s-diff-port-281114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-281114 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.222 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:51:53.783254   60146 iso.go:125] acquiring lock: {Name:mk85871dbcb370cef71a4aa2ff7ef43503908c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 11:51:53.786018   60146 out.go:177] * Starting "default-k8s-diff-port-281114" primary control-plane node in "default-k8s-diff-port-281114" cluster
	I0610 11:51:53.787303   60146 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 11:51:53.787344   60146 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0610 11:51:53.787357   60146 cache.go:56] Caching tarball of preloaded images
	I0610 11:51:53.787439   60146 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 11:51:53.787455   60146 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0610 11:51:53.787569   60146 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/config.json ...
	I0610 11:51:53.787799   60146 start.go:360] acquireMachinesLock for default-k8s-diff-port-281114: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 11:51:53.787855   60146 start.go:364] duration metric: took 30.27µs to acquireMachinesLock for "default-k8s-diff-port-281114"
	I0610 11:51:53.787875   60146 start.go:96] Skipping create...Using existing machine configuration
	I0610 11:51:53.787881   60146 fix.go:54] fixHost starting: 
	I0610 11:51:53.788131   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:51:53.788165   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:51:53.805744   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35417
	I0610 11:51:53.806279   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:51:53.806909   60146 main.go:141] libmachine: Using API Version  1
	I0610 11:51:53.806936   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:51:53.807346   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:51:53.807532   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:51:53.807718   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 11:51:53.809469   60146 fix.go:112] recreateIfNeeded on default-k8s-diff-port-281114: state=Running err=<nil>
	W0610 11:51:53.809507   60146 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 11:51:53.811518   60146 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-281114" VM ...
	I0610 11:51:50.691535   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:52.691588   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:54.692007   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:54.248038   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:54.261302   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:54.261375   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:54.293194   57945 cri.go:89] found id: ""
	I0610 11:51:54.293228   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.293240   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:54.293247   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:54.293307   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:54.326656   57945 cri.go:89] found id: ""
	I0610 11:51:54.326687   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.326699   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:54.326707   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:54.326764   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:54.359330   57945 cri.go:89] found id: ""
	I0610 11:51:54.359365   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.359378   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:54.359386   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:54.359450   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:54.391520   57945 cri.go:89] found id: ""
	I0610 11:51:54.391549   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.391558   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:54.391565   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:54.391642   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:54.426803   57945 cri.go:89] found id: ""
	I0610 11:51:54.426840   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.426850   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:54.426860   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:54.426936   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:54.462618   57945 cri.go:89] found id: ""
	I0610 11:51:54.462645   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.462653   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:54.462659   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:54.462728   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:54.494599   57945 cri.go:89] found id: ""
	I0610 11:51:54.494631   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.494642   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:54.494650   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:54.494701   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:54.528236   57945 cri.go:89] found id: ""
	I0610 11:51:54.528265   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.528280   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:54.528290   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:54.528305   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:54.579562   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:54.579604   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:54.592871   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:54.592899   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:54.661928   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:54.661950   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:54.661984   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:54.741578   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:54.741611   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:53.939312   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:55.940181   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:53.812752   60146 machine.go:94] provisionDockerMachine start ...
	I0610 11:51:53.812779   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:51:53.813001   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:51:53.815580   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:51:53.815981   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:47:50 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:51:53.816013   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:51:53.816111   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:51:53.816288   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:51:53.816435   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:51:53.816577   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:51:53.816743   60146 main.go:141] libmachine: Using SSH client type: native
	I0610 11:51:53.817141   60146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.222 22 <nil> <nil>}
	I0610 11:51:53.817157   60146 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 11:51:56.705435   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:51:56.692515   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:59.192511   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:57.283397   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:57.296631   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:57.296704   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:57.328185   57945 cri.go:89] found id: ""
	I0610 11:51:57.328217   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.328228   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:57.328237   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:57.328302   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:57.360137   57945 cri.go:89] found id: ""
	I0610 11:51:57.360163   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.360173   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:57.360188   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:57.360244   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:57.395638   57945 cri.go:89] found id: ""
	I0610 11:51:57.395680   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.395691   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:57.395700   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:57.395765   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:57.429024   57945 cri.go:89] found id: ""
	I0610 11:51:57.429051   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.429062   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:57.429070   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:57.429132   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:57.461726   57945 cri.go:89] found id: ""
	I0610 11:51:57.461757   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.461767   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:57.461773   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:57.461838   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:57.495055   57945 cri.go:89] found id: ""
	I0610 11:51:57.495078   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.495086   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:57.495092   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:57.495138   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:57.526495   57945 cri.go:89] found id: ""
	I0610 11:51:57.526521   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.526530   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:57.526536   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:57.526598   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:57.559160   57945 cri.go:89] found id: ""
	I0610 11:51:57.559181   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.559189   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:57.559197   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:57.559212   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:57.593801   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:57.593827   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:57.641074   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:57.641106   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:57.654097   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:57.654124   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:57.726137   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:57.726160   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:57.726176   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
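The block above is minikube asking the CRI runtime, via crictl, whether any control-plane containers exist; every query comes back with an empty ID list ("found id: \"\""). A rough sketch of that enumeration, run locally rather than through ssh_runner (the component list mirrors the log; this is not the actual cri.go implementation):

	// list_control_plane.go - illustrative only: run the same
	// `crictl ps -a --quiet --name=<component>` queries seen in the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("%-24s crictl failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			// An empty list here corresponds to "0 containers" / "No container was found" above.
			fmt.Printf("%-24s %d container(s)\n", name, len(ids))
		}
	}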
	I0610 11:52:00.302303   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:00.314500   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:00.314560   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:00.345865   57945 cri.go:89] found id: ""
	I0610 11:52:00.345889   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.345897   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:00.345902   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:00.345946   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:00.377383   57945 cri.go:89] found id: ""
	I0610 11:52:00.377405   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.377412   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:00.377417   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:00.377482   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:00.408667   57945 cri.go:89] found id: ""
	I0610 11:52:00.408694   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.408701   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:00.408706   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:00.408755   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:00.444349   57945 cri.go:89] found id: ""
	I0610 11:52:00.444379   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.444390   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:00.444397   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:00.444455   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:00.477886   57945 cri.go:89] found id: ""
	I0610 11:52:00.477910   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.477918   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:00.477924   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:00.477982   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:00.508996   57945 cri.go:89] found id: ""
	I0610 11:52:00.509023   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.509030   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:00.509036   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:00.509097   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:00.541548   57945 cri.go:89] found id: ""
	I0610 11:52:00.541572   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.541580   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:00.541585   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:00.541642   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:00.574507   57945 cri.go:89] found id: ""
	I0610 11:52:00.574534   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.574541   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:00.574550   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:00.574565   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:00.610838   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:00.610862   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:00.661155   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:00.661197   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:00.674122   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:00.674154   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:00.745943   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:00.745976   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:00.745993   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:58.439245   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:00.441145   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:59.777253   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:01.691833   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:04.193279   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
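The interleaved pod_ready.go lines come from other StartStop tests polling their metrics-server pod, which never reports Ready within the wait window. A hedged sketch of such a readiness check using client-go (assuming a recent client-go; the kubeconfig path is a placeholder and the pod name is taken from the log):

	// pod_ready_check.go - illustrative, not minikube's pod_ready helper:
	// fetch the pod and inspect its Ready condition.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-569cc877fc-5zg8j", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		ready := false
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		// "Ready":"False" in the log corresponds to ready == false here.
		fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
	}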
	I0610 11:52:03.325365   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:03.337955   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:03.338042   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:03.370767   57945 cri.go:89] found id: ""
	I0610 11:52:03.370798   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.370810   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:03.370818   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:03.370903   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:03.402587   57945 cri.go:89] found id: ""
	I0610 11:52:03.402616   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.402623   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:03.402628   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:03.402684   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:03.436751   57945 cri.go:89] found id: ""
	I0610 11:52:03.436778   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.436788   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:03.436795   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:03.436854   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:03.467745   57945 cri.go:89] found id: ""
	I0610 11:52:03.467778   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.467788   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:03.467798   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:03.467865   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:03.499321   57945 cri.go:89] found id: ""
	I0610 11:52:03.499347   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.499355   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:03.499361   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:03.499419   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:03.534209   57945 cri.go:89] found id: ""
	I0610 11:52:03.534242   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.534253   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:03.534261   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:03.534318   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:03.567837   57945 cri.go:89] found id: ""
	I0610 11:52:03.567871   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.567882   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:03.567889   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:03.567954   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:03.604223   57945 cri.go:89] found id: ""
	I0610 11:52:03.604249   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.604258   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:03.604266   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:03.604280   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:03.659716   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:03.659751   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:03.673389   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:03.673425   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:03.746076   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:03.746104   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:03.746118   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:03.825803   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:03.825837   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:06.362151   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:06.375320   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:06.375394   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:06.409805   57945 cri.go:89] found id: ""
	I0610 11:52:06.409840   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.409851   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:06.409859   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:06.409914   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:06.447126   57945 cri.go:89] found id: ""
	I0610 11:52:06.447157   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.447167   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:06.447174   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:06.447237   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:06.479443   57945 cri.go:89] found id: ""
	I0610 11:52:06.479472   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.479483   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:06.479489   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:06.479546   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:06.511107   57945 cri.go:89] found id: ""
	I0610 11:52:06.511137   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.511148   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:06.511163   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:06.511223   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:06.542727   57945 cri.go:89] found id: ""
	I0610 11:52:06.542753   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.542761   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:06.542767   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:06.542812   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:06.582141   57945 cri.go:89] found id: ""
	I0610 11:52:06.582166   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.582174   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:06.582180   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:06.582239   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:06.615203   57945 cri.go:89] found id: ""
	I0610 11:52:06.615230   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.615240   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:06.615248   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:06.615314   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:06.650286   57945 cri.go:89] found id: ""
	I0610 11:52:06.650310   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.650317   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:06.650326   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:06.650338   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:06.721601   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:06.721631   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:06.721646   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:06.794645   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:06.794679   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:06.830598   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:06.830628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:06.880740   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:06.880786   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:02.939105   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:04.939366   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:07.439715   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:05.861224   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
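Process 60146, provisioning default-k8s-diff-port-281114, repeatedly fails to open TCP to 192.168.50.222:22 with "no route to host" while libmachine retries. The shape of that wait loop, as an assumption-laden sketch (address, timeout, and backoff are illustrative, not libmachine's actual values or code):

	// ssh_port_wait.go - illustrative retry loop waiting for the guest's SSH port.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.50.222:22" // address from the dial errors above
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err == nil {
				conn.Close()
				fmt.Println("SSH port reachable:", addr)
				return
			}
			// "no route to host" appears here while the VM network is not up yet.
			fmt.Println("still waiting:", err)
			time.Sleep(3 * time.Second)
		}
		fmt.Println("gave up waiting for", addr)
	}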
	I0610 11:52:06.691130   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:09.191608   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:09.394202   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:09.409822   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:09.409898   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:09.451573   57945 cri.go:89] found id: ""
	I0610 11:52:09.451597   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.451605   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:09.451611   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:09.451663   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:09.491039   57945 cri.go:89] found id: ""
	I0610 11:52:09.491069   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.491080   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:09.491087   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:09.491147   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:09.522023   57945 cri.go:89] found id: ""
	I0610 11:52:09.522050   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.522058   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:09.522063   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:09.522108   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:09.554014   57945 cri.go:89] found id: ""
	I0610 11:52:09.554040   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.554048   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:09.554057   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:09.554127   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:09.586285   57945 cri.go:89] found id: ""
	I0610 11:52:09.586318   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.586328   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:09.586336   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:09.586396   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:09.618362   57945 cri.go:89] found id: ""
	I0610 11:52:09.618391   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.618401   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:09.618408   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:09.618465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:09.651067   57945 cri.go:89] found id: ""
	I0610 11:52:09.651097   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.651108   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:09.651116   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:09.651174   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:09.682764   57945 cri.go:89] found id: ""
	I0610 11:52:09.682792   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.682799   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:09.682807   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:09.682819   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:09.755071   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:09.755096   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:09.755109   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:09.833635   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:09.833672   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:09.869744   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:09.869777   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:09.924045   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:09.924079   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:09.440296   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:11.939025   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:08.929184   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:11.691213   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:13.693439   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:12.438029   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:12.452003   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:12.452070   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:12.485680   57945 cri.go:89] found id: ""
	I0610 11:52:12.485711   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.485719   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:12.485725   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:12.485773   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:12.519200   57945 cri.go:89] found id: ""
	I0610 11:52:12.519227   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.519238   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:12.519245   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:12.519317   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:12.553154   57945 cri.go:89] found id: ""
	I0610 11:52:12.553179   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.553185   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:12.553191   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:12.553237   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:12.584499   57945 cri.go:89] found id: ""
	I0610 11:52:12.584543   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.584555   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:12.584564   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:12.584619   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:12.619051   57945 cri.go:89] found id: ""
	I0610 11:52:12.619079   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.619094   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:12.619102   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:12.619165   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:12.653652   57945 cri.go:89] found id: ""
	I0610 11:52:12.653690   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.653702   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:12.653710   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:12.653773   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:12.685887   57945 cri.go:89] found id: ""
	I0610 11:52:12.685919   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.685930   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:12.685938   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:12.685997   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:12.719534   57945 cri.go:89] found id: ""
	I0610 11:52:12.719567   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.719578   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:12.719591   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:12.719603   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:12.770689   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:12.770725   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:12.783574   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:12.783604   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:12.855492   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:12.855518   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:12.855529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:12.928993   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:12.929037   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:15.487670   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:15.501367   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:15.501437   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:15.534205   57945 cri.go:89] found id: ""
	I0610 11:52:15.534248   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.534256   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:15.534262   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:15.534315   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:15.570972   57945 cri.go:89] found id: ""
	I0610 11:52:15.571001   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.571008   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:15.571013   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:15.571073   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:15.604233   57945 cri.go:89] found id: ""
	I0610 11:52:15.604258   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.604267   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:15.604273   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:15.604328   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:15.637119   57945 cri.go:89] found id: ""
	I0610 11:52:15.637150   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.637159   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:15.637167   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:15.637226   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:15.670548   57945 cri.go:89] found id: ""
	I0610 11:52:15.670572   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.670580   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:15.670586   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:15.670644   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:15.706374   57945 cri.go:89] found id: ""
	I0610 11:52:15.706398   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.706406   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:15.706412   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:15.706457   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:15.742828   57945 cri.go:89] found id: ""
	I0610 11:52:15.742852   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.742859   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:15.742865   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:15.742910   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:15.773783   57945 cri.go:89] found id: ""
	I0610 11:52:15.773811   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.773818   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:15.773825   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:15.773835   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:15.828725   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:15.828764   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:15.842653   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:15.842682   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:15.919771   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:15.919794   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:15.919809   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:15.994439   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:15.994478   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:13.943213   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:16.439647   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:15.009211   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:18.081244   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:16.191615   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:18.191760   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:18.532040   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:18.544800   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:18.544893   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:18.579148   57945 cri.go:89] found id: ""
	I0610 11:52:18.579172   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.579180   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:18.579186   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:18.579236   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:18.613005   57945 cri.go:89] found id: ""
	I0610 11:52:18.613028   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.613035   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:18.613042   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:18.613094   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:18.648843   57945 cri.go:89] found id: ""
	I0610 11:52:18.648870   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.648878   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:18.648883   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:18.648939   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:18.678943   57945 cri.go:89] found id: ""
	I0610 11:52:18.678974   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.679014   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:18.679022   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:18.679082   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:18.728485   57945 cri.go:89] found id: ""
	I0610 11:52:18.728516   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.728527   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:18.728535   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:18.728605   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:18.764320   57945 cri.go:89] found id: ""
	I0610 11:52:18.764352   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.764363   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:18.764370   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:18.764431   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:18.797326   57945 cri.go:89] found id: ""
	I0610 11:52:18.797358   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.797369   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:18.797377   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:18.797440   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:18.832517   57945 cri.go:89] found id: ""
	I0610 11:52:18.832552   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.832563   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:18.832574   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:18.832588   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:18.845158   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:18.845192   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:18.915928   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:18.915959   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:18.915974   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:18.990583   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:18.990625   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:19.029044   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:19.029069   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:21.582973   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:21.596373   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:21.596453   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:21.633497   57945 cri.go:89] found id: ""
	I0610 11:52:21.633528   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.633538   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:21.633546   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:21.633631   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:21.663999   57945 cri.go:89] found id: ""
	I0610 11:52:21.664055   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.664069   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:21.664078   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:21.664138   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:21.698105   57945 cri.go:89] found id: ""
	I0610 11:52:21.698136   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.698147   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:21.698155   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:21.698213   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:21.730036   57945 cri.go:89] found id: ""
	I0610 11:52:21.730061   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.730068   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:21.730074   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:21.730119   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:21.764484   57945 cri.go:89] found id: ""
	I0610 11:52:21.764507   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.764515   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:21.764520   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:21.764575   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:21.797366   57945 cri.go:89] found id: ""
	I0610 11:52:21.797397   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.797408   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:21.797415   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:21.797478   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:21.832991   57945 cri.go:89] found id: ""
	I0610 11:52:21.833023   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.833030   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:21.833035   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:21.833081   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:21.868859   57945 cri.go:89] found id: ""
	I0610 11:52:21.868890   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.868899   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:21.868924   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:21.868937   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:21.918976   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:21.919013   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:21.934602   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:21.934629   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:22.002888   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:22.002909   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:22.002920   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:22.082894   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:22.082941   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:18.439853   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:20.942040   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:20.692398   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:23.191532   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:24.620683   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:24.634200   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:24.634280   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:24.667181   57945 cri.go:89] found id: ""
	I0610 11:52:24.667209   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.667217   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:24.667222   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:24.667277   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:24.702114   57945 cri.go:89] found id: ""
	I0610 11:52:24.702142   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.702151   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:24.702158   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:24.702220   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:24.734464   57945 cri.go:89] found id: ""
	I0610 11:52:24.734488   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.734497   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:24.734502   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:24.734565   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:24.767074   57945 cri.go:89] found id: ""
	I0610 11:52:24.767124   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.767132   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:24.767138   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:24.767210   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:24.800328   57945 cri.go:89] found id: ""
	I0610 11:52:24.800358   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.800369   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:24.800376   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:24.800442   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:24.837785   57945 cri.go:89] found id: ""
	I0610 11:52:24.837814   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.837822   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:24.837828   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:24.837878   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:24.874886   57945 cri.go:89] found id: ""
	I0610 11:52:24.874910   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.874917   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:24.874923   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:24.874968   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:24.912191   57945 cri.go:89] found id: ""
	I0610 11:52:24.912217   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.912235   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:24.912247   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:24.912265   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:24.968229   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:24.968262   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:24.981018   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:24.981048   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:25.049879   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:25.049907   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:25.049922   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:25.135103   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:25.135156   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:23.440293   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:25.939540   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:27.201186   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:25.691136   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:27.691669   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:27.687667   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:27.700418   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:27.700486   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:27.733712   57945 cri.go:89] found id: ""
	I0610 11:52:27.733740   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.733749   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:27.733754   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:27.733839   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:27.774063   57945 cri.go:89] found id: ""
	I0610 11:52:27.774089   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.774100   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:27.774108   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:27.774169   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:27.813906   57945 cri.go:89] found id: ""
	I0610 11:52:27.813945   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.813956   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:27.813963   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:27.814031   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:27.845877   57945 cri.go:89] found id: ""
	I0610 11:52:27.845901   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.845909   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:27.845915   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:27.845961   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:27.880094   57945 cri.go:89] found id: ""
	I0610 11:52:27.880139   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.880148   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:27.880153   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:27.880206   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:27.914308   57945 cri.go:89] found id: ""
	I0610 11:52:27.914332   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.914342   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:27.914355   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:27.914420   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:27.949386   57945 cri.go:89] found id: ""
	I0610 11:52:27.949412   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.949423   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:27.949430   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:27.949490   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:27.983901   57945 cri.go:89] found id: ""
	I0610 11:52:27.983927   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.983938   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:27.983948   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:27.983963   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:28.032820   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:28.032853   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:28.046306   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:28.046332   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:28.120614   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:28.120642   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:28.120657   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:28.202182   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:28.202217   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:30.741274   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:30.754276   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:30.754358   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:30.789142   57945 cri.go:89] found id: ""
	I0610 11:52:30.789174   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.789185   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:30.789193   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:30.789255   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:30.822319   57945 cri.go:89] found id: ""
	I0610 11:52:30.822350   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.822362   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:30.822369   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:30.822428   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:30.853166   57945 cri.go:89] found id: ""
	I0610 11:52:30.853192   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.853199   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:30.853204   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:30.853271   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:30.892290   57945 cri.go:89] found id: ""
	I0610 11:52:30.892320   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.892331   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:30.892339   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:30.892401   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:30.938603   57945 cri.go:89] found id: ""
	I0610 11:52:30.938629   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.938639   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:30.938646   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:30.938703   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:30.994532   57945 cri.go:89] found id: ""
	I0610 11:52:30.994567   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.994583   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:30.994589   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:30.994649   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:31.041818   57945 cri.go:89] found id: ""
	I0610 11:52:31.041847   57945 logs.go:276] 0 containers: []
	W0610 11:52:31.041859   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:31.041867   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:31.041923   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:31.079897   57945 cri.go:89] found id: ""
	I0610 11:52:31.079927   57945 logs.go:276] 0 containers: []
	W0610 11:52:31.079938   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:31.079951   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:31.079967   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:31.092291   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:31.092321   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:31.163921   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:31.163943   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:31.163955   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:31.242247   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:31.242287   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:31.281257   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:31.281286   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:27.940743   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:30.440529   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:30.273256   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:30.192386   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:32.192470   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:34.691408   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:33.837783   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:33.851085   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:33.851164   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:33.885285   57945 cri.go:89] found id: ""
	I0610 11:52:33.885314   57945 logs.go:276] 0 containers: []
	W0610 11:52:33.885324   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:33.885332   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:33.885391   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:33.924958   57945 cri.go:89] found id: ""
	I0610 11:52:33.924996   57945 logs.go:276] 0 containers: []
	W0610 11:52:33.925006   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:33.925022   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:33.925083   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:33.958563   57945 cri.go:89] found id: ""
	I0610 11:52:33.958589   57945 logs.go:276] 0 containers: []
	W0610 11:52:33.958598   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:33.958606   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:33.958665   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:33.991575   57945 cri.go:89] found id: ""
	I0610 11:52:33.991606   57945 logs.go:276] 0 containers: []
	W0610 11:52:33.991616   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:33.991624   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:33.991693   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:34.029700   57945 cri.go:89] found id: ""
	I0610 11:52:34.029729   57945 logs.go:276] 0 containers: []
	W0610 11:52:34.029740   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:34.029748   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:34.029805   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:34.068148   57945 cri.go:89] found id: ""
	I0610 11:52:34.068183   57945 logs.go:276] 0 containers: []
	W0610 11:52:34.068194   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:34.068201   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:34.068275   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:34.100735   57945 cri.go:89] found id: ""
	I0610 11:52:34.100760   57945 logs.go:276] 0 containers: []
	W0610 11:52:34.100767   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:34.100772   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:34.100817   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:34.132898   57945 cri.go:89] found id: ""
	I0610 11:52:34.132927   57945 logs.go:276] 0 containers: []
	W0610 11:52:34.132937   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:34.132958   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:34.132972   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:34.184690   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:34.184723   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:34.199604   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:34.199641   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:34.270744   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:34.270763   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:34.270775   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:34.352291   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:34.352334   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:36.894188   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:36.914098   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:36.914158   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:36.957378   57945 cri.go:89] found id: ""
	I0610 11:52:36.957408   57945 logs.go:276] 0 containers: []
	W0610 11:52:36.957419   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:36.957427   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:36.957498   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:37.003576   57945 cri.go:89] found id: ""
	I0610 11:52:37.003602   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.003611   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:37.003618   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:37.003677   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:37.040221   57945 cri.go:89] found id: ""
	I0610 11:52:37.040245   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.040253   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:37.040259   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:37.040307   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:37.078151   57945 cri.go:89] found id: ""
	I0610 11:52:37.078185   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.078195   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:37.078202   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:37.078261   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:37.117446   57945 cri.go:89] found id: ""
	I0610 11:52:37.117468   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.117476   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:37.117482   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:37.117548   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:37.155320   57945 cri.go:89] found id: ""
	I0610 11:52:37.155344   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.155356   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:37.155364   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:37.155414   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:37.192194   57945 cri.go:89] found id: ""
	I0610 11:52:37.192221   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.192230   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:37.192238   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:37.192303   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:37.225567   57945 cri.go:89] found id: ""
	I0610 11:52:37.225594   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.225605   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:37.225616   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:37.225632   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:37.240139   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:37.240164   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 11:52:32.940571   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:34.940672   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:37.440898   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:36.353199   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:36.697419   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:39.190952   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	W0610 11:52:37.307754   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:37.307784   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:37.307801   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:37.385929   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:37.385964   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:37.424991   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:37.425029   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:39.974839   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:39.988788   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:39.988858   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:40.025922   57945 cri.go:89] found id: ""
	I0610 11:52:40.025947   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.025954   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:40.025967   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:40.026026   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:40.062043   57945 cri.go:89] found id: ""
	I0610 11:52:40.062076   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.062085   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:40.062094   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:40.062158   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:40.095441   57945 cri.go:89] found id: ""
	I0610 11:52:40.095465   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.095472   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:40.095478   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:40.095529   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:40.127633   57945 cri.go:89] found id: ""
	I0610 11:52:40.127662   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.127672   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:40.127680   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:40.127740   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:40.161232   57945 cri.go:89] found id: ""
	I0610 11:52:40.161257   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.161267   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:40.161274   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:40.161334   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:40.194491   57945 cri.go:89] found id: ""
	I0610 11:52:40.194521   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.194529   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:40.194535   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:40.194583   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:40.226376   57945 cri.go:89] found id: ""
	I0610 11:52:40.226404   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.226411   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:40.226416   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:40.226465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:40.257938   57945 cri.go:89] found id: ""
	I0610 11:52:40.257968   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.257978   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:40.257988   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:40.258004   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:40.327247   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:40.327276   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:40.327291   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:40.404231   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:40.404263   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:40.441554   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:40.441585   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:40.491952   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:40.491987   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:39.939538   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:41.939639   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:39.425159   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:41.191808   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:43.695646   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:43.006217   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:43.019113   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:43.019187   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:43.053010   57945 cri.go:89] found id: ""
	I0610 11:52:43.053035   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.053045   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:43.053051   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:43.053115   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:43.086118   57945 cri.go:89] found id: ""
	I0610 11:52:43.086145   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.086156   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:43.086171   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:43.086235   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:43.117892   57945 cri.go:89] found id: ""
	I0610 11:52:43.117919   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.117929   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:43.117937   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:43.118011   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:43.149751   57945 cri.go:89] found id: ""
	I0610 11:52:43.149777   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.149787   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:43.149795   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:43.149855   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:43.184215   57945 cri.go:89] found id: ""
	I0610 11:52:43.184250   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.184261   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:43.184268   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:43.184332   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:43.219758   57945 cri.go:89] found id: ""
	I0610 11:52:43.219787   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.219797   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:43.219805   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:43.219868   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:43.250698   57945 cri.go:89] found id: ""
	I0610 11:52:43.250728   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.250738   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:43.250746   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:43.250803   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:43.286526   57945 cri.go:89] found id: ""
	I0610 11:52:43.286556   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.286566   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:43.286576   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:43.286589   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:43.362219   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:43.362255   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:43.398332   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:43.398366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:43.449468   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:43.449502   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:43.462346   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:43.462381   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:43.539578   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:46.039720   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:46.052749   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:46.052821   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:46.093110   57945 cri.go:89] found id: ""
	I0610 11:52:46.093139   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.093147   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:46.093152   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:46.093219   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:46.130885   57945 cri.go:89] found id: ""
	I0610 11:52:46.130916   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.130924   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:46.130930   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:46.130977   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:46.167471   57945 cri.go:89] found id: ""
	I0610 11:52:46.167507   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.167524   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:46.167531   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:46.167593   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:46.204776   57945 cri.go:89] found id: ""
	I0610 11:52:46.204799   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.204807   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:46.204812   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:46.204860   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:46.244826   57945 cri.go:89] found id: ""
	I0610 11:52:46.244859   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.244869   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:46.244876   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:46.244942   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:46.281757   57945 cri.go:89] found id: ""
	I0610 11:52:46.281783   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.281791   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:46.281797   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:46.281844   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:46.319517   57945 cri.go:89] found id: ""
	I0610 11:52:46.319546   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.319558   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:46.319566   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:46.319636   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:46.355806   57945 cri.go:89] found id: ""
	I0610 11:52:46.355835   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.355846   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:46.355858   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:46.355872   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:46.433087   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:46.433131   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:46.468792   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:46.468829   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:46.517931   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:46.517969   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:46.530892   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:46.530935   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:46.592585   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:43.940733   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:46.440354   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:45.505281   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:48.577214   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:46.191520   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:48.691214   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:49.093662   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:49.106539   57945 kubeadm.go:591] duration metric: took 4m4.396325615s to restartPrimaryControlPlane
	W0610 11:52:49.106625   57945 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0610 11:52:49.106658   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0610 11:52:48.441202   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:50.433923   57572 pod_ready.go:81] duration metric: took 4m0.000312516s for pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace to be "Ready" ...
	E0610 11:52:50.433960   57572 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0610 11:52:50.433982   57572 pod_ready.go:38] duration metric: took 4m5.113212783s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 11:52:50.434008   57572 kubeadm.go:591] duration metric: took 4m16.406085019s to restartPrimaryControlPlane
	W0610 11:52:50.434091   57572 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0610 11:52:50.434128   57572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0610 11:52:53.503059   57945 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.396374472s)
	I0610 11:52:53.503148   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:52:53.518235   57945 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 11:52:53.529298   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:52:53.539273   57945 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:52:53.539297   57945 kubeadm.go:156] found existing configuration files:
	
	I0610 11:52:53.539341   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 11:52:53.548285   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:52:53.548354   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:52:53.557659   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 11:52:53.569253   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:52:53.569330   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:52:53.579689   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 11:52:53.589800   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:52:53.589865   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:52:53.600324   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 11:52:53.610542   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:52:53.610612   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 11:52:53.620144   57945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 11:52:53.687195   57945 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0610 11:52:53.687302   57945 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 11:52:53.851035   57945 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 11:52:53.851178   57945 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 11:52:53.851305   57945 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 11:52:54.037503   57945 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 11:52:54.039523   57945 out.go:204]   - Generating certificates and keys ...
	I0610 11:52:54.039621   57945 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 11:52:54.039718   57945 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 11:52:54.039850   57945 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 11:52:54.039959   57945 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 11:52:54.040055   57945 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 11:52:54.040135   57945 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 11:52:54.040233   57945 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 11:52:54.040506   57945 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 11:52:54.040892   57945 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 11:52:54.041344   57945 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 11:52:54.041411   57945 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 11:52:54.041507   57945 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 11:52:54.151486   57945 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 11:52:54.389555   57945 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 11:52:54.507653   57945 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 11:52:54.690886   57945 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 11:52:54.708542   57945 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 11:52:54.712251   57945 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 11:52:54.712504   57945 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 11:52:54.872755   57945 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 11:52:50.691517   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:53.191418   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:54.874801   57945 out.go:204]   - Booting up control plane ...
	I0610 11:52:54.874978   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 11:52:54.883224   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 11:52:54.885032   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 11:52:54.886182   57945 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 11:52:54.891030   57945 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 11:52:54.661214   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:57.729160   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:55.691987   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:58.192548   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:00.692060   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:03.192673   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:03.809217   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:06.885176   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:05.692004   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:07.692545   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:12.961318   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:10.191064   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:12.192258   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:14.691564   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:16.033278   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:16.691670   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:18.691801   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:21.778313   57572 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.344150357s)
	I0610 11:53:21.778398   57572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:53:21.793960   57572 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 11:53:21.803952   57572 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:53:21.813685   57572 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:53:21.813709   57572 kubeadm.go:156] found existing configuration files:
	
	I0610 11:53:21.813758   57572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 11:53:21.823957   57572 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:53:21.824027   57572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:53:21.833125   57572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 11:53:21.841834   57572 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:53:21.841893   57572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:53:21.850999   57572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 11:53:21.859858   57572 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:53:21.859920   57572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:53:21.869076   57572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 11:53:21.877079   57572 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:53:21.877141   57572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 11:53:21.887614   57572 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 11:53:21.941932   57572 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0610 11:53:21.941987   57572 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 11:53:22.084118   57572 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 11:53:22.084219   57572 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 11:53:22.084310   57572 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 11:53:22.287685   57572 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 11:53:22.289568   57572 out.go:204]   - Generating certificates and keys ...
	I0610 11:53:22.289674   57572 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 11:53:22.289779   57572 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 11:53:22.289917   57572 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 11:53:22.290032   57572 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 11:53:22.290144   57572 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 11:53:22.290234   57572 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 11:53:22.290339   57572 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 11:53:22.290439   57572 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 11:53:22.290558   57572 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 11:53:22.290674   57572 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 11:53:22.290732   57572 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 11:53:22.290819   57572 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 11:53:22.354674   57572 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 11:53:22.573948   57572 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 11:53:22.805694   57572 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 11:53:22.914740   57572 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 11:53:23.218887   57572 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 11:53:23.221479   57572 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 11:53:23.223937   57572 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 11:53:22.113312   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:20.692241   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:23.192124   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:23.695912   56769 pod_ready.go:81] duration metric: took 4m0.01073501s for pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace to be "Ready" ...
	E0610 11:53:23.695944   56769 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0610 11:53:23.695954   56769 pod_ready.go:38] duration metric: took 4m2.412094982s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 11:53:23.695972   56769 api_server.go:52] waiting for apiserver process to appear ...
	I0610 11:53:23.696001   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:53:23.696058   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:53:23.758822   56769 cri.go:89] found id: "61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:23.758850   56769 cri.go:89] found id: ""
	I0610 11:53:23.758860   56769 logs.go:276] 1 containers: [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29]
	I0610 11:53:23.758921   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.765128   56769 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:53:23.765198   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:53:23.798454   56769 cri.go:89] found id: "0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:23.798483   56769 cri.go:89] found id: ""
	I0610 11:53:23.798494   56769 logs.go:276] 1 containers: [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c]
	I0610 11:53:23.798560   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.802985   56769 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:53:23.803051   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:53:23.855781   56769 cri.go:89] found id: "04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:23.855810   56769 cri.go:89] found id: ""
	I0610 11:53:23.855819   56769 logs.go:276] 1 containers: [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933]
	I0610 11:53:23.855873   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.860285   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:53:23.860363   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:53:23.901849   56769 cri.go:89] found id: "7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:23.901868   56769 cri.go:89] found id: ""
	I0610 11:53:23.901878   56769 logs.go:276] 1 containers: [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9]
	I0610 11:53:23.901935   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.906116   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:53:23.906183   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:53:23.941376   56769 cri.go:89] found id: "3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:23.941396   56769 cri.go:89] found id: ""
	I0610 11:53:23.941405   56769 logs.go:276] 1 containers: [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb]
	I0610 11:53:23.941463   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.947379   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:53:23.947450   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:53:23.984733   56769 cri.go:89] found id: "7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:23.984757   56769 cri.go:89] found id: ""
	I0610 11:53:23.984766   56769 logs.go:276] 1 containers: [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43]
	I0610 11:53:23.984839   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.988701   56769 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:53:23.988752   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:53:24.024067   56769 cri.go:89] found id: ""
	I0610 11:53:24.024094   56769 logs.go:276] 0 containers: []
	W0610 11:53:24.024103   56769 logs.go:278] No container was found matching "kindnet"
	I0610 11:53:24.024110   56769 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0610 11:53:24.024170   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0610 11:53:24.058220   56769 cri.go:89] found id: "5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:24.058250   56769 cri.go:89] found id: "8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:24.058255   56769 cri.go:89] found id: ""
	I0610 11:53:24.058263   56769 logs.go:276] 2 containers: [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262]
	I0610 11:53:24.058321   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:24.062072   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:24.065706   56769 logs.go:123] Gathering logs for etcd [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c] ...
	I0610 11:53:24.065723   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:24.104622   56769 logs.go:123] Gathering logs for coredns [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933] ...
	I0610 11:53:24.104652   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:24.142432   56769 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:53:24.142457   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:53:24.670328   56769 logs.go:123] Gathering logs for container status ...
	I0610 11:53:24.670375   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:53:24.726557   56769 logs.go:123] Gathering logs for kube-scheduler [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9] ...
	I0610 11:53:24.726592   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:24.769111   56769 logs.go:123] Gathering logs for kube-proxy [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb] ...
	I0610 11:53:24.769150   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:24.811199   56769 logs.go:123] Gathering logs for kube-controller-manager [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43] ...
	I0610 11:53:24.811246   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:24.876489   56769 logs.go:123] Gathering logs for storage-provisioner [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e] ...
	I0610 11:53:24.876547   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:23.225694   57572 out.go:204]   - Booting up control plane ...
	I0610 11:53:23.225803   57572 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 11:53:23.225898   57572 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 11:53:23.226004   57572 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 11:53:23.245138   57572 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 11:53:23.246060   57572 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 11:53:23.246121   57572 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 11:53:23.375562   57572 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0610 11:53:23.375689   57572 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 11:53:23.877472   57572 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.888048ms
	I0610 11:53:23.877560   57572 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0610 11:53:25.185274   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:28.879976   57572 kubeadm.go:309] [api-check] The API server is healthy after 5.002334008s
	I0610 11:53:28.902382   57572 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 11:53:28.924552   57572 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 11:53:28.956686   57572 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 11:53:28.956958   57572 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-298179 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 11:53:28.971883   57572 kubeadm.go:309] [bootstrap-token] Using token: zdzp8m.ttyzgfzbws24vbk8
	I0610 11:53:24.916641   56769 logs.go:123] Gathering logs for kubelet ...
	I0610 11:53:24.916824   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:53:24.980737   56769 logs.go:123] Gathering logs for dmesg ...
	I0610 11:53:24.980779   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:53:24.998139   56769 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:53:24.998163   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 11:53:25.113809   56769 logs.go:123] Gathering logs for kube-apiserver [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29] ...
	I0610 11:53:25.113839   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:25.168214   56769 logs.go:123] Gathering logs for storage-provisioner [8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262] ...
	I0610 11:53:25.168254   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:27.708296   56769 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:53:27.730996   56769 api_server.go:72] duration metric: took 4m14.155149231s to wait for apiserver process to appear ...
	I0610 11:53:27.731021   56769 api_server.go:88] waiting for apiserver healthz status ...
	I0610 11:53:27.731057   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:53:27.731116   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:53:27.767385   56769 cri.go:89] found id: "61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:27.767411   56769 cri.go:89] found id: ""
	I0610 11:53:27.767420   56769 logs.go:276] 1 containers: [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29]
	I0610 11:53:27.767465   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:27.771646   56769 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:53:27.771723   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:53:27.806969   56769 cri.go:89] found id: "0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:27.806996   56769 cri.go:89] found id: ""
	I0610 11:53:27.807005   56769 logs.go:276] 1 containers: [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c]
	I0610 11:53:27.807060   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:27.811580   56769 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:53:27.811655   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:53:27.850853   56769 cri.go:89] found id: "04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:27.850879   56769 cri.go:89] found id: ""
	I0610 11:53:27.850888   56769 logs.go:276] 1 containers: [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933]
	I0610 11:53:27.850947   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:27.855284   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:53:27.855347   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:53:27.901228   56769 cri.go:89] found id: "7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:27.901256   56769 cri.go:89] found id: ""
	I0610 11:53:27.901266   56769 logs.go:276] 1 containers: [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9]
	I0610 11:53:27.901322   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:27.905361   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:53:27.905428   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:53:27.943162   56769 cri.go:89] found id: "3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:27.943187   56769 cri.go:89] found id: ""
	I0610 11:53:27.943197   56769 logs.go:276] 1 containers: [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb]
	I0610 11:53:27.943251   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:27.951934   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:53:27.952015   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:53:27.996288   56769 cri.go:89] found id: "7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:27.996316   56769 cri.go:89] found id: ""
	I0610 11:53:27.996325   56769 logs.go:276] 1 containers: [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43]
	I0610 11:53:27.996381   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:28.000307   56769 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:53:28.000378   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:53:28.036978   56769 cri.go:89] found id: ""
	I0610 11:53:28.037016   56769 logs.go:276] 0 containers: []
	W0610 11:53:28.037026   56769 logs.go:278] No container was found matching "kindnet"
	I0610 11:53:28.037033   56769 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0610 11:53:28.037091   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0610 11:53:28.078338   56769 cri.go:89] found id: "5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:28.078363   56769 cri.go:89] found id: "8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:28.078368   56769 cri.go:89] found id: ""
	I0610 11:53:28.078377   56769 logs.go:276] 2 containers: [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262]
	I0610 11:53:28.078433   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:28.082899   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:28.087382   56769 logs.go:123] Gathering logs for storage-provisioner [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e] ...
	I0610 11:53:28.087416   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:28.123014   56769 logs.go:123] Gathering logs for kubelet ...
	I0610 11:53:28.123051   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:53:28.186128   56769 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:53:28.186160   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 11:53:28.314495   56769 logs.go:123] Gathering logs for kube-apiserver [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29] ...
	I0610 11:53:28.314539   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:28.358953   56769 logs.go:123] Gathering logs for coredns [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933] ...
	I0610 11:53:28.358981   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:28.394280   56769 logs.go:123] Gathering logs for kube-controller-manager [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43] ...
	I0610 11:53:28.394306   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:28.450138   56769 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:53:28.450172   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:53:28.851268   56769 logs.go:123] Gathering logs for container status ...
	I0610 11:53:28.851307   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:53:28.909176   56769 logs.go:123] Gathering logs for dmesg ...
	I0610 11:53:28.909202   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:53:28.927322   56769 logs.go:123] Gathering logs for etcd [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c] ...
	I0610 11:53:28.927359   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:28.983941   56769 logs.go:123] Gathering logs for kube-scheduler [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9] ...
	I0610 11:53:28.983971   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:29.023327   56769 logs.go:123] Gathering logs for kube-proxy [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb] ...
	I0610 11:53:29.023352   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:29.063624   56769 logs.go:123] Gathering logs for storage-provisioner [8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262] ...
	I0610 11:53:29.063655   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:28.973316   57572 out.go:204]   - Configuring RBAC rules ...
	I0610 11:53:28.973437   57572 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 11:53:28.979726   57572 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 11:53:28.989075   57572 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 11:53:28.999678   57572 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 11:53:29.005717   57572 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 11:53:29.014439   57572 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 11:53:29.292088   57572 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 11:53:29.734969   57572 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 11:53:30.288723   57572 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 11:53:30.289824   57572 kubeadm.go:309] 
	I0610 11:53:30.289918   57572 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 11:53:30.289930   57572 kubeadm.go:309] 
	I0610 11:53:30.290061   57572 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 11:53:30.290078   57572 kubeadm.go:309] 
	I0610 11:53:30.290107   57572 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 11:53:30.290191   57572 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 11:53:30.290268   57572 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 11:53:30.290316   57572 kubeadm.go:309] 
	I0610 11:53:30.290402   57572 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 11:53:30.290412   57572 kubeadm.go:309] 
	I0610 11:53:30.290481   57572 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 11:53:30.290494   57572 kubeadm.go:309] 
	I0610 11:53:30.290539   57572 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 11:53:30.290602   57572 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 11:53:30.290659   57572 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 11:53:30.290666   57572 kubeadm.go:309] 
	I0610 11:53:30.290749   57572 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 11:53:30.290816   57572 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 11:53:30.290823   57572 kubeadm.go:309] 
	I0610 11:53:30.290901   57572 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token zdzp8m.ttyzgfzbws24vbk8 \
	I0610 11:53:30.291011   57572 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e \
	I0610 11:53:30.291032   57572 kubeadm.go:309] 	--control-plane 
	I0610 11:53:30.291038   57572 kubeadm.go:309] 
	I0610 11:53:30.291113   57572 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 11:53:30.291120   57572 kubeadm.go:309] 
	I0610 11:53:30.291230   57572 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token zdzp8m.ttyzgfzbws24vbk8 \
	I0610 11:53:30.291370   57572 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e 
	I0610 11:53:30.291895   57572 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 11:53:30.291925   57572 cni.go:84] Creating CNI manager for ""
	I0610 11:53:30.291936   57572 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:53:30.294227   57572 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 11:53:30.295470   57572 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 11:53:30.306011   57572 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0610 11:53:30.322832   57572 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 11:53:30.322890   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:30.322960   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-298179 minikube.k8s.io/updated_at=2024_06_10T11_53_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=no-preload-298179 minikube.k8s.io/primary=true
	I0610 11:53:30.486915   57572 ops.go:34] apiserver oom_adj: -16
	I0610 11:53:30.487320   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:30.988103   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:31.488094   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:31.988314   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:32.487603   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:31.265182   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:31.597111   56769 api_server.go:253] Checking apiserver healthz at https://192.168.61.19:8443/healthz ...
	I0610 11:53:31.601589   56769 api_server.go:279] https://192.168.61.19:8443/healthz returned 200:
	ok
	I0610 11:53:31.602609   56769 api_server.go:141] control plane version: v1.30.1
	I0610 11:53:31.602631   56769 api_server.go:131] duration metric: took 3.871604169s to wait for apiserver health ...
	I0610 11:53:31.602639   56769 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 11:53:31.602663   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:53:31.602716   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:53:31.650102   56769 cri.go:89] found id: "61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:31.650130   56769 cri.go:89] found id: ""
	I0610 11:53:31.650139   56769 logs.go:276] 1 containers: [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29]
	I0610 11:53:31.650197   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.654234   56769 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:53:31.654299   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:53:31.690704   56769 cri.go:89] found id: "0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:31.690736   56769 cri.go:89] found id: ""
	I0610 11:53:31.690750   56769 logs.go:276] 1 containers: [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c]
	I0610 11:53:31.690810   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.695139   56769 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:53:31.695209   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:53:31.732593   56769 cri.go:89] found id: "04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:31.732614   56769 cri.go:89] found id: ""
	I0610 11:53:31.732621   56769 logs.go:276] 1 containers: [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933]
	I0610 11:53:31.732667   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.737201   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:53:31.737277   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:53:31.774177   56769 cri.go:89] found id: "7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:31.774219   56769 cri.go:89] found id: ""
	I0610 11:53:31.774239   56769 logs.go:276] 1 containers: [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9]
	I0610 11:53:31.774300   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.778617   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:53:31.778695   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:53:31.816633   56769 cri.go:89] found id: "3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:31.816657   56769 cri.go:89] found id: ""
	I0610 11:53:31.816665   56769 logs.go:276] 1 containers: [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb]
	I0610 11:53:31.816715   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.820846   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:53:31.820928   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:53:31.857021   56769 cri.go:89] found id: "7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:31.857052   56769 cri.go:89] found id: ""
	I0610 11:53:31.857062   56769 logs.go:276] 1 containers: [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43]
	I0610 11:53:31.857127   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.862825   56769 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:53:31.862888   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:53:31.903792   56769 cri.go:89] found id: ""
	I0610 11:53:31.903817   56769 logs.go:276] 0 containers: []
	W0610 11:53:31.903825   56769 logs.go:278] No container was found matching "kindnet"
	I0610 11:53:31.903837   56769 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0610 11:53:31.903885   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0610 11:53:31.942392   56769 cri.go:89] found id: "5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:31.942414   56769 cri.go:89] found id: "8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:31.942419   56769 cri.go:89] found id: ""
	I0610 11:53:31.942428   56769 logs.go:276] 2 containers: [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262]
	I0610 11:53:31.942481   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.949047   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.953590   56769 logs.go:123] Gathering logs for kube-scheduler [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9] ...
	I0610 11:53:31.953625   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:31.991926   56769 logs.go:123] Gathering logs for kube-controller-manager [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43] ...
	I0610 11:53:31.991954   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:32.040857   56769 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:53:32.040894   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:53:32.432680   56769 logs.go:123] Gathering logs for container status ...
	I0610 11:53:32.432731   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:53:32.474819   56769 logs.go:123] Gathering logs for kubelet ...
	I0610 11:53:32.474849   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:53:32.530152   56769 logs.go:123] Gathering logs for dmesg ...
	I0610 11:53:32.530189   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:53:32.547698   56769 logs.go:123] Gathering logs for etcd [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c] ...
	I0610 11:53:32.547735   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:32.598580   56769 logs.go:123] Gathering logs for kube-proxy [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb] ...
	I0610 11:53:32.598634   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:32.643864   56769 logs.go:123] Gathering logs for storage-provisioner [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e] ...
	I0610 11:53:32.643900   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:32.679085   56769 logs.go:123] Gathering logs for storage-provisioner [8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262] ...
	I0610 11:53:32.679118   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:32.714247   56769 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:53:32.714279   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 11:53:32.818508   56769 logs.go:123] Gathering logs for kube-apiserver [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29] ...
	I0610 11:53:32.818551   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:32.862390   56769 logs.go:123] Gathering logs for coredns [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933] ...
	I0610 11:53:32.862424   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:35.408169   56769 system_pods.go:59] 8 kube-system pods found
	I0610 11:53:35.408198   56769 system_pods.go:61] "coredns-7db6d8ff4d-7dlzb" [4b2618cd-b48c-44bd-a07d-4fe4585a14fa] Running
	I0610 11:53:35.408203   56769 system_pods.go:61] "etcd-embed-certs-832735" [4b7d413d-9a2a-4677-b279-5a6d39904679] Running
	I0610 11:53:35.408208   56769 system_pods.go:61] "kube-apiserver-embed-certs-832735" [7e11e03e-7b15-4e9b-8f9a-9a46d7aadd7e] Running
	I0610 11:53:35.408211   56769 system_pods.go:61] "kube-controller-manager-embed-certs-832735" [75aa996d-fdf3-4c32-b25d-03c7582b3502] Running
	I0610 11:53:35.408215   56769 system_pods.go:61] "kube-proxy-b7x2p" [fe1cd055-691f-46b1-ada7-7dded31d2308] Running
	I0610 11:53:35.408218   56769 system_pods.go:61] "kube-scheduler-embed-certs-832735" [b7a7fcfb-7ce9-4470-9052-79bc13029408] Running
	I0610 11:53:35.408223   56769 system_pods.go:61] "metrics-server-569cc877fc-5zg8j" [e979b4b0-356d-479d-990f-d9e6e46a1a9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 11:53:35.408233   56769 system_pods.go:61] "storage-provisioner" [47aa143e-3545-492d-ac93-e62f0076e0f4] Running
	I0610 11:53:35.408241   56769 system_pods.go:74] duration metric: took 3.805596332s to wait for pod list to return data ...
	I0610 11:53:35.408248   56769 default_sa.go:34] waiting for default service account to be created ...
	I0610 11:53:35.410634   56769 default_sa.go:45] found service account: "default"
	I0610 11:53:35.410659   56769 default_sa.go:55] duration metric: took 2.405735ms for default service account to be created ...
	I0610 11:53:35.410667   56769 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 11:53:35.415849   56769 system_pods.go:86] 8 kube-system pods found
	I0610 11:53:35.415871   56769 system_pods.go:89] "coredns-7db6d8ff4d-7dlzb" [4b2618cd-b48c-44bd-a07d-4fe4585a14fa] Running
	I0610 11:53:35.415876   56769 system_pods.go:89] "etcd-embed-certs-832735" [4b7d413d-9a2a-4677-b279-5a6d39904679] Running
	I0610 11:53:35.415881   56769 system_pods.go:89] "kube-apiserver-embed-certs-832735" [7e11e03e-7b15-4e9b-8f9a-9a46d7aadd7e] Running
	I0610 11:53:35.415886   56769 system_pods.go:89] "kube-controller-manager-embed-certs-832735" [75aa996d-fdf3-4c32-b25d-03c7582b3502] Running
	I0610 11:53:35.415890   56769 system_pods.go:89] "kube-proxy-b7x2p" [fe1cd055-691f-46b1-ada7-7dded31d2308] Running
	I0610 11:53:35.415894   56769 system_pods.go:89] "kube-scheduler-embed-certs-832735" [b7a7fcfb-7ce9-4470-9052-79bc13029408] Running
	I0610 11:53:35.415900   56769 system_pods.go:89] "metrics-server-569cc877fc-5zg8j" [e979b4b0-356d-479d-990f-d9e6e46a1a9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 11:53:35.415906   56769 system_pods.go:89] "storage-provisioner" [47aa143e-3545-492d-ac93-e62f0076e0f4] Running
	I0610 11:53:35.415913   56769 system_pods.go:126] duration metric: took 5.241641ms to wait for k8s-apps to be running ...
	I0610 11:53:35.415919   56769 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 11:53:35.415957   56769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:53:35.431179   56769 system_svc.go:56] duration metric: took 15.252123ms WaitForService to wait for kubelet
	I0610 11:53:35.431209   56769 kubeadm.go:576] duration metric: took 4m21.85536785s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 11:53:35.431233   56769 node_conditions.go:102] verifying NodePressure condition ...
	I0610 11:53:35.433918   56769 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 11:53:35.433941   56769 node_conditions.go:123] node cpu capacity is 2
	I0610 11:53:35.433955   56769 node_conditions.go:105] duration metric: took 2.718538ms to run NodePressure ...
	I0610 11:53:35.433966   56769 start.go:240] waiting for startup goroutines ...
	I0610 11:53:35.433973   56769 start.go:245] waiting for cluster config update ...
	I0610 11:53:35.433982   56769 start.go:254] writing updated cluster config ...
	I0610 11:53:35.434234   56769 ssh_runner.go:195] Run: rm -f paused
	I0610 11:53:35.483552   56769 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 11:53:35.485459   56769 out.go:177] * Done! kubectl is now configured to use "embed-certs-832735" cluster and "default" namespace by default
	I0610 11:53:34.892890   57945 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0610 11:53:34.893019   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:53:34.893195   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:53:32.987749   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:33.488008   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:33.988419   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:34.488002   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:34.988349   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:35.487347   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:35.987479   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:36.487972   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:36.987442   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:37.488069   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:34.337236   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:39.893441   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:53:39.893640   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:53:37.987751   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:38.488215   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:38.987955   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:39.487394   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:39.987431   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:40.488304   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:40.987779   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:41.488123   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:41.987438   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:42.487799   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:42.987548   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:43.084050   57572 kubeadm.go:1107] duration metric: took 12.761214532s to wait for elevateKubeSystemPrivileges
	W0610 11:53:43.084095   57572 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 11:53:43.084109   57572 kubeadm.go:393] duration metric: took 5m9.100565129s to StartCluster
	I0610 11:53:43.084128   57572 settings.go:142] acquiring lock: {Name:mk00410f6b6051b7558c7a348cc8c9f1c35c7547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:53:43.084215   57572 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 11:53:43.085889   57572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/kubeconfig: {Name:mk6bc087e599296d9e4a696a021944fac20ee98b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:53:43.086151   57572 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 11:53:43.087762   57572 out.go:177] * Verifying Kubernetes components...
	I0610 11:53:43.086215   57572 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 11:53:43.087796   57572 addons.go:69] Setting storage-provisioner=true in profile "no-preload-298179"
	I0610 11:53:43.087800   57572 addons.go:69] Setting default-storageclass=true in profile "no-preload-298179"
	I0610 11:53:43.087819   57572 addons.go:234] Setting addon storage-provisioner=true in "no-preload-298179"
	W0610 11:53:43.087825   57572 addons.go:243] addon storage-provisioner should already be in state true
	I0610 11:53:43.087832   57572 addons.go:69] Setting metrics-server=true in profile "no-preload-298179"
	I0610 11:53:43.087847   57572 addons.go:234] Setting addon metrics-server=true in "no-preload-298179"
	W0610 11:53:43.087856   57572 addons.go:243] addon metrics-server should already be in state true
	I0610 11:53:43.087826   57572 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-298179"
	I0610 11:53:43.087878   57572 host.go:66] Checking if "no-preload-298179" exists ...
	I0610 11:53:43.089535   57572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:53:43.087856   57572 host.go:66] Checking if "no-preload-298179" exists ...
	I0610 11:53:43.086356   57572 config.go:182] Loaded profile config "no-preload-298179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:53:43.088180   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.088182   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.089687   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.089713   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.089869   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.089895   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.104587   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38283
	I0610 11:53:43.104609   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44535
	I0610 11:53:43.104586   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34031
	I0610 11:53:43.105501   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.105566   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.105508   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.105983   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.105997   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.106134   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.106153   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.106160   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.106184   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.106350   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.106526   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.106568   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.106692   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetState
	I0610 11:53:43.106890   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.106918   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.107118   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.107141   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.109645   57572 addons.go:234] Setting addon default-storageclass=true in "no-preload-298179"
	W0610 11:53:43.109664   57572 addons.go:243] addon default-storageclass should already be in state true
	I0610 11:53:43.109692   57572 host.go:66] Checking if "no-preload-298179" exists ...
	I0610 11:53:43.109914   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.109939   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.123209   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45401
	I0610 11:53:43.123703   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.124011   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38295
	I0610 11:53:43.124351   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.124372   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.124393   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.124777   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.124847   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.124869   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.124998   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetState
	I0610 11:53:43.125208   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.125941   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.125994   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.126208   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35175
	I0610 11:53:43.126555   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.126915   57572 main.go:141] libmachine: (no-preload-298179) Calling .DriverName
	I0610 11:53:43.127030   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.127038   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.129007   57572 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0610 11:53:43.127369   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.130329   57572 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0610 11:53:43.130349   57572 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0610 11:53:43.130372   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHHostname
	I0610 11:53:43.130501   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetState
	I0610 11:53:43.132699   57572 main.go:141] libmachine: (no-preload-298179) Calling .DriverName
	I0610 11:53:43.134359   57572 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 11:53:40.417218   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:43.489341   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:43.135801   57572 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 11:53:43.135818   57572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 11:53:43.135837   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHHostname
	I0610 11:53:43.134045   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.135918   57572 main.go:141] libmachine: (no-preload-298179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:72:68", ip: ""} in network mk-no-preload-298179: {Iface:virbr2 ExpiryTime:2024-06-10 12:48:08 +0000 UTC Type:0 Mac:52:54:00:92:72:68 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:no-preload-298179 Clientid:01:52:54:00:92:72:68}
	I0610 11:53:43.135948   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined IP address 192.168.39.48 and MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.134772   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHPort
	I0610 11:53:43.136159   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHKeyPath
	I0610 11:53:43.136318   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHUsername
	I0610 11:53:43.136621   57572 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/no-preload-298179/id_rsa Username:docker}
	I0610 11:53:43.139217   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.139636   57572 main.go:141] libmachine: (no-preload-298179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:72:68", ip: ""} in network mk-no-preload-298179: {Iface:virbr2 ExpiryTime:2024-06-10 12:48:08 +0000 UTC Type:0 Mac:52:54:00:92:72:68 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:no-preload-298179 Clientid:01:52:54:00:92:72:68}
	I0610 11:53:43.139658   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined IP address 192.168.39.48 and MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.140091   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHPort
	I0610 11:53:43.140568   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHKeyPath
	I0610 11:53:43.140865   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHUsername
	I0610 11:53:43.141293   57572 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/no-preload-298179/id_rsa Username:docker}
	I0610 11:53:43.145179   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40713
	I0610 11:53:43.145813   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.146336   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.146358   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.146675   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.146987   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetState
	I0610 11:53:43.148747   57572 main.go:141] libmachine: (no-preload-298179) Calling .DriverName
	I0610 11:53:43.149026   57572 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 11:53:43.149042   57572 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 11:53:43.149064   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHHostname
	I0610 11:53:43.152048   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.152550   57572 main.go:141] libmachine: (no-preload-298179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:72:68", ip: ""} in network mk-no-preload-298179: {Iface:virbr2 ExpiryTime:2024-06-10 12:48:08 +0000 UTC Type:0 Mac:52:54:00:92:72:68 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:no-preload-298179 Clientid:01:52:54:00:92:72:68}
	I0610 11:53:43.152572   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined IP address 192.168.39.48 and MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.152780   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHPort
	I0610 11:53:43.153021   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHKeyPath
	I0610 11:53:43.153256   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHUsername
	I0610 11:53:43.153406   57572 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/no-preload-298179/id_rsa Username:docker}
	I0610 11:53:43.293079   57572 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 11:53:43.323699   57572 node_ready.go:35] waiting up to 6m0s for node "no-preload-298179" to be "Ready" ...
	I0610 11:53:43.331922   57572 node_ready.go:49] node "no-preload-298179" has status "Ready":"True"
	I0610 11:53:43.331946   57572 node_ready.go:38] duration metric: took 8.212434ms for node "no-preload-298179" to be "Ready" ...
	I0610 11:53:43.331956   57572 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 11:53:43.338721   57572 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9mqrm" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:43.399175   57572 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0610 11:53:43.399196   57572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0610 11:53:43.432920   57572 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0610 11:53:43.432986   57572 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0610 11:53:43.453982   57572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 11:53:43.457146   57572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 11:53:43.500871   57572 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0610 11:53:43.500900   57572 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0610 11:53:43.601303   57572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0610 11:53:44.376916   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.376992   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.377083   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.377105   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.377298   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.377377   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.377383   57572 main.go:141] libmachine: (no-preload-298179) DBG | Closing plugin on server side
	I0610 11:53:44.377301   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.377394   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.377403   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.377405   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.377414   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.377421   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.377608   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.377634   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.379039   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.379090   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.379054   57572 main.go:141] libmachine: (no-preload-298179) DBG | Closing plugin on server side
	I0610 11:53:44.397328   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.397354   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.397626   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.397644   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.880094   57572 pod_ready.go:92] pod "coredns-7db6d8ff4d-9mqrm" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:44.880129   57572 pod_ready.go:81] duration metric: took 1.541384627s for pod "coredns-7db6d8ff4d-9mqrm" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.880149   57572 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f622z" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.901625   57572 pod_ready.go:92] pod "coredns-7db6d8ff4d-f622z" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:44.901649   57572 pod_ready.go:81] duration metric: took 21.492207ms for pod "coredns-7db6d8ff4d-f622z" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.901658   57572 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.907530   57572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.306184796s)
	I0610 11:53:44.907587   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.907603   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.907929   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.907991   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.908005   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.908015   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.908262   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.908301   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.908305   57572 main.go:141] libmachine: (no-preload-298179) DBG | Closing plugin on server side
	I0610 11:53:44.908315   57572 addons.go:475] Verifying addon metrics-server=true in "no-preload-298179"
	I0610 11:53:44.910622   57572 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0610 11:53:44.911848   57572 addons.go:510] duration metric: took 1.825630817s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
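The addon-enable sequence above copies each manifest onto the node and then applies them in a single kubectl invocation. The following is a minimal illustrative sketch of that scp-then-apply pattern, not minikube's actual ssh_runner code; the host, key path, and staging directory (/tmp) are assumptions.

// Sketch only: push addon manifests to a node over SSH, then apply them with
// the node's bundled kubectl, mirroring the scp + "kubectl apply -f ..." lines
// in the log above. Paths and host are placeholders.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	node := "docker@192.168.39.48"
	key := "/path/to/id_rsa" // placeholder key path
	manifests := []string{
		"metrics-apiservice.yaml",
		"metrics-server-deployment.yaml",
		"metrics-server-rbac.yaml",
		"metrics-server-service.yaml",
	}

	// Copy each manifest to a staging directory on the node.
	for _, m := range manifests {
		scp := exec.Command("scp", "-i", key, m, node+":/tmp/"+m)
		if out, err := scp.CombinedOutput(); err != nil {
			fmt.Printf("scp %s failed: %v\n%s", m, err, out)
			return
		}
	}

	// Apply all manifests in one kubectl call, as the log does.
	args := []string{"-i", key, node, "sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.30.1/kubectl", "apply"}
	for _, m := range manifests {
		args = append(args, "-f", "/tmp/"+m)
	}
	if out, err := exec.Command("ssh", args...).CombinedOutput(); err != nil {
		fmt.Printf("kubectl apply failed: %v\n%s", err, out)
		return
	}
	fmt.Println("addon manifests applied")
}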
	I0610 11:53:44.922534   57572 pod_ready.go:92] pod "etcd-no-preload-298179" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:44.922562   57572 pod_ready.go:81] duration metric: took 20.896794ms for pod "etcd-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.922576   57572 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.947545   57572 pod_ready.go:92] pod "kube-apiserver-no-preload-298179" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:44.947569   57572 pod_ready.go:81] duration metric: took 24.984822ms for pod "kube-apiserver-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.947578   57572 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.956216   57572 pod_ready.go:92] pod "kube-controller-manager-no-preload-298179" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:44.956240   57572 pod_ready.go:81] duration metric: took 8.656291ms for pod "kube-controller-manager-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.956256   57572 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fhndh" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:45.326936   57572 pod_ready.go:92] pod "kube-proxy-fhndh" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:45.326977   57572 pod_ready.go:81] duration metric: took 370.713967ms for pod "kube-proxy-fhndh" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:45.326987   57572 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:45.733487   57572 pod_ready.go:92] pod "kube-scheduler-no-preload-298179" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:45.733514   57572 pod_ready.go:81] duration metric: took 406.51925ms for pod "kube-scheduler-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:45.733525   57572 pod_ready.go:38] duration metric: took 2.401559014s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
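The pod_ready phase above waits for every pod carrying the listed control-plane label selectors to report the Ready condition. Below is a hedged sketch of that readiness poll using client-go; it assumes a kubeconfig path and client-go as a dependency, and it is not minikube's pod_ready implementation.

// Sketch only: poll kube-system until all pods matching the selectors from the
// log report the Ready condition, or a deadline passes.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		allReady := true
		for _, sel := range selectors {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: sel})
			if err != nil || len(pods.Items) == 0 {
				allReady = false
				break
			}
			for _, p := range pods.Items {
				if !podReady(p) {
					allReady = false
				}
			}
		}
		if allReady {
			fmt.Println("all system-critical pods are Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for system-critical pods")
}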
	I0610 11:53:45.733544   57572 api_server.go:52] waiting for apiserver process to appear ...
	I0610 11:53:45.733612   57572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:53:45.754814   57572 api_server.go:72] duration metric: took 2.668628419s to wait for apiserver process to appear ...
	I0610 11:53:45.754838   57572 api_server.go:88] waiting for apiserver healthz status ...
	I0610 11:53:45.754867   57572 api_server.go:253] Checking apiserver healthz at https://192.168.39.48:8443/healthz ...
	I0610 11:53:45.763742   57572 api_server.go:279] https://192.168.39.48:8443/healthz returned 200:
	ok
	I0610 11:53:45.765314   57572 api_server.go:141] control plane version: v1.30.1
	I0610 11:53:45.765345   57572 api_server.go:131] duration metric: took 10.498726ms to wait for apiserver health ...
	I0610 11:53:45.765356   57572 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 11:53:45.930764   57572 system_pods.go:59] 9 kube-system pods found
	I0610 11:53:45.930792   57572 system_pods.go:61] "coredns-7db6d8ff4d-9mqrm" [6269d670-dffa-4526-8117-0b44df04554a] Running
	I0610 11:53:45.930796   57572 system_pods.go:61] "coredns-7db6d8ff4d-f622z" [16cb4de3-afa9-4e45-bc85-e51273973808] Running
	I0610 11:53:45.930800   57572 system_pods.go:61] "etcd-no-preload-298179" [088f1950-04c4-49e0-b3e2-fe8b5f398a08] Running
	I0610 11:53:45.930806   57572 system_pods.go:61] "kube-apiserver-no-preload-298179" [11bad142-25ff-4aa9-9d9e-4b7cbb053bdd] Running
	I0610 11:53:45.930810   57572 system_pods.go:61] "kube-controller-manager-no-preload-298179" [ac29a4d9-6e9c-44fd-bb39-477255b94d0c] Running
	I0610 11:53:45.930814   57572 system_pods.go:61] "kube-proxy-fhndh" [50f848e7-44f6-4ab1-bf94-3189733abca2] Running
	I0610 11:53:45.930818   57572 system_pods.go:61] "kube-scheduler-no-preload-298179" [8569c375-b9bd-4a46-91ea-c6372056e45d] Running
	I0610 11:53:45.930826   57572 system_pods.go:61] "metrics-server-569cc877fc-jp7dr" [21136ae9-40d8-4857-aca5-47e3fa3b7e9c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 11:53:45.930831   57572 system_pods.go:61] "storage-provisioner" [783f523c-4c21-4ae0-bc18-9c391e7342b0] Running
	I0610 11:53:45.930843   57572 system_pods.go:74] duration metric: took 165.479385ms to wait for pod list to return data ...
	I0610 11:53:45.930855   57572 default_sa.go:34] waiting for default service account to be created ...
	I0610 11:53:46.127109   57572 default_sa.go:45] found service account: "default"
	I0610 11:53:46.127145   57572 default_sa.go:55] duration metric: took 196.279685ms for default service account to be created ...
	I0610 11:53:46.127154   57572 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 11:53:46.330560   57572 system_pods.go:86] 9 kube-system pods found
	I0610 11:53:46.330587   57572 system_pods.go:89] "coredns-7db6d8ff4d-9mqrm" [6269d670-dffa-4526-8117-0b44df04554a] Running
	I0610 11:53:46.330592   57572 system_pods.go:89] "coredns-7db6d8ff4d-f622z" [16cb4de3-afa9-4e45-bc85-e51273973808] Running
	I0610 11:53:46.330597   57572 system_pods.go:89] "etcd-no-preload-298179" [088f1950-04c4-49e0-b3e2-fe8b5f398a08] Running
	I0610 11:53:46.330601   57572 system_pods.go:89] "kube-apiserver-no-preload-298179" [11bad142-25ff-4aa9-9d9e-4b7cbb053bdd] Running
	I0610 11:53:46.330605   57572 system_pods.go:89] "kube-controller-manager-no-preload-298179" [ac29a4d9-6e9c-44fd-bb39-477255b94d0c] Running
	I0610 11:53:46.330608   57572 system_pods.go:89] "kube-proxy-fhndh" [50f848e7-44f6-4ab1-bf94-3189733abca2] Running
	I0610 11:53:46.330612   57572 system_pods.go:89] "kube-scheduler-no-preload-298179" [8569c375-b9bd-4a46-91ea-c6372056e45d] Running
	I0610 11:53:46.330619   57572 system_pods.go:89] "metrics-server-569cc877fc-jp7dr" [21136ae9-40d8-4857-aca5-47e3fa3b7e9c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 11:53:46.330623   57572 system_pods.go:89] "storage-provisioner" [783f523c-4c21-4ae0-bc18-9c391e7342b0] Running
	I0610 11:53:46.330631   57572 system_pods.go:126] duration metric: took 203.472984ms to wait for k8s-apps to be running ...
	I0610 11:53:46.330640   57572 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 11:53:46.330681   57572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:53:46.345084   57572 system_svc.go:56] duration metric: took 14.432966ms WaitForService to wait for kubelet
	I0610 11:53:46.345113   57572 kubeadm.go:576] duration metric: took 3.258932349s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 11:53:46.345131   57572 node_conditions.go:102] verifying NodePressure condition ...
	I0610 11:53:46.528236   57572 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 11:53:46.528269   57572 node_conditions.go:123] node cpu capacity is 2
	I0610 11:53:46.528278   57572 node_conditions.go:105] duration metric: took 183.142711ms to run NodePressure ...
	I0610 11:53:46.528288   57572 start.go:240] waiting for startup goroutines ...
	I0610 11:53:46.528294   57572 start.go:245] waiting for cluster config update ...
	I0610 11:53:46.528303   57572 start.go:254] writing updated cluster config ...
	I0610 11:53:46.528561   57572 ssh_runner.go:195] Run: rm -f paused
	I0610 11:53:46.576348   57572 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 11:53:46.578434   57572 out.go:177] * Done! kubectl is now configured to use "no-preload-298179" cluster and "default" namespace by default
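Before declaring the cluster ready, the run above also waits for the apiserver healthz check at https://192.168.39.48:8443/healthz to return 200. A minimal sketch of that kind of health poll follows; the endpoint URL and timeout are assumptions, and a real client would verify the apiserver's CA instead of skipping TLS verification.

// Sketch only: poll an HTTPS health endpoint until it returns 200 OK or a
// deadline passes, similar to the apiserver /healthz wait in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cluster-issued cert; skipping verification is
		// for the sketch only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthy("https://192.168.39.48:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver reports ok")
}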
	I0610 11:53:49.894176   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:53:49.894368   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:53:49.573292   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:52.641233   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:58.721260   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:01.793270   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:07.873263   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:09.895012   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:54:09.895413   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:54:10.945237   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:17.025183   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:20.097196   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:26.177217   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:29.249267   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:35.329193   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:38.401234   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:44.481254   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:47.553200   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:49.896623   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:54:49.896849   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:54:49.896868   57945 kubeadm.go:309] 
	I0610 11:54:49.896922   57945 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0610 11:54:49.897030   57945 kubeadm.go:309] 		timed out waiting for the condition
	I0610 11:54:49.897053   57945 kubeadm.go:309] 
	I0610 11:54:49.897121   57945 kubeadm.go:309] 	This error is likely caused by:
	I0610 11:54:49.897157   57945 kubeadm.go:309] 		- The kubelet is not running
	I0610 11:54:49.897308   57945 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0610 11:54:49.897322   57945 kubeadm.go:309] 
	I0610 11:54:49.897493   57945 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0610 11:54:49.897553   57945 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0610 11:54:49.897612   57945 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0610 11:54:49.897623   57945 kubeadm.go:309] 
	I0610 11:54:49.897755   57945 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0610 11:54:49.897866   57945 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0610 11:54:49.897876   57945 kubeadm.go:309] 
	I0610 11:54:49.898032   57945 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0610 11:54:49.898139   57945 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0610 11:54:49.898253   57945 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0610 11:54:49.898357   57945 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0610 11:54:49.898365   57945 kubeadm.go:309] 
	I0610 11:54:49.899094   57945 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 11:54:49.899208   57945 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0610 11:54:49.899302   57945 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0610 11:54:49.899441   57945 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0610 11:54:49.899502   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0610 11:54:50.366528   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:54:50.380107   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:54:50.390067   57945 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:54:50.390089   57945 kubeadm.go:156] found existing configuration files:
	
	I0610 11:54:50.390132   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 11:54:50.399159   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:54:50.399222   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:54:50.409346   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 11:54:50.420402   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:54:50.420458   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:54:50.432874   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 11:54:50.444351   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:54:50.444430   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:54:50.458175   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 11:54:50.468538   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:54:50.468611   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
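The grep/rm sequence above treats each kubeconfig under /etc/kubernetes as stale if it does not mention the expected control-plane endpoint, and removes it before re-running kubeadm init. Here is a simplified sketch of that check; the paths and endpoint string come from the log, but the logic is an illustration, not minikube's implementation.

// Sketch only: drop kubeconfig files that do not reference the expected
// control-plane endpoint so kubeadm regenerates them on the next init.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	configs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range configs {
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing, unreadable, or pointing elsewhere: remove it.
			_ = os.Remove(path)
			fmt.Printf("removed stale %s\n", path)
			continue
		}
		fmt.Printf("keeping %s\n", path)
	}
}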
	I0610 11:54:50.480033   57945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 11:54:50.543600   57945 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0610 11:54:50.543653   57945 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 11:54:50.682810   57945 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 11:54:50.682970   57945 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 11:54:50.683117   57945 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 11:54:50.877761   57945 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 11:54:50.879686   57945 out.go:204]   - Generating certificates and keys ...
	I0610 11:54:50.879788   57945 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 11:54:50.879881   57945 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 11:54:50.880010   57945 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 11:54:50.880075   57945 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 11:54:50.880145   57945 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 11:54:50.880235   57945 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 11:54:50.880334   57945 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 11:54:50.880543   57945 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 11:54:50.880654   57945 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 11:54:50.880771   57945 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 11:54:50.880835   57945 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 11:54:50.880912   57945 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 11:54:51.326073   57945 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 11:54:51.537409   57945 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 11:54:51.721400   57945 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 11:54:51.884882   57945 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 11:54:51.904377   57945 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 11:54:51.906470   57945 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 11:54:51.906560   57945 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 11:54:52.065800   57945 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 11:54:52.067657   57945 out.go:204]   - Booting up control plane ...
	I0610 11:54:52.067807   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 11:54:52.069012   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 11:54:52.070508   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 11:54:52.071669   57945 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 11:54:52.074772   57945 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 11:54:53.633176   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:56.705245   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:02.785227   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:05.857320   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:11.941172   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:15.009275   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:21.089235   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:24.161264   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:32.077145   57945 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0610 11:55:32.077542   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:55:32.077740   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:55:30.241187   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:33.313200   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:37.078114   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:55:37.078357   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:55:39.393317   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:42.465223   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:47.078706   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:55:47.078906   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:55:48.545281   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:51.617229   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:57.697600   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:00.769294   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:07.079053   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:56:07.079285   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:56:06.849261   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:09.925249   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:16.001299   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:19.077309   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:25.153200   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:28.225172   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:31.226848   60146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 11:56:31.226888   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetMachineName
	I0610 11:56:31.227225   60146 buildroot.go:166] provisioning hostname "default-k8s-diff-port-281114"
	I0610 11:56:31.227250   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetMachineName
	I0610 11:56:31.227458   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:31.229187   60146 machine.go:97] duration metric: took 4m37.416418256s to provisionDockerMachine
	I0610 11:56:31.229224   60146 fix.go:56] duration metric: took 4m37.441343871s for fixHost
	I0610 11:56:31.229230   60146 start.go:83] releasing machines lock for "default-k8s-diff-port-281114", held for 4m37.44136358s
	W0610 11:56:31.229245   60146 start.go:713] error starting host: provision: host is not running
	W0610 11:56:31.229314   60146 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0610 11:56:31.229325   60146 start.go:728] Will try again in 5 seconds ...
	I0610 11:56:36.230954   60146 start.go:360] acquireMachinesLock for default-k8s-diff-port-281114: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 11:56:36.231068   60146 start.go:364] duration metric: took 60.465µs to acquireMachinesLock for "default-k8s-diff-port-281114"
	I0610 11:56:36.231091   60146 start.go:96] Skipping create...Using existing machine configuration
	I0610 11:56:36.231096   60146 fix.go:54] fixHost starting: 
	I0610 11:56:36.231372   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:56:36.231392   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:56:36.247286   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38849
	I0610 11:56:36.247715   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:56:36.248272   60146 main.go:141] libmachine: Using API Version  1
	I0610 11:56:36.248292   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:56:36.248585   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:56:36.248787   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:36.248939   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 11:56:36.250776   60146 fix.go:112] recreateIfNeeded on default-k8s-diff-port-281114: state=Stopped err=<nil>
	I0610 11:56:36.250796   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	W0610 11:56:36.250950   60146 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 11:56:36.252942   60146 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-281114" ...
	I0610 11:56:36.254300   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Start
	I0610 11:56:36.254515   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Ensuring networks are active...
	I0610 11:56:36.255281   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Ensuring network default is active
	I0610 11:56:36.255626   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Ensuring network mk-default-k8s-diff-port-281114 is active
	I0610 11:56:36.256059   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Getting domain xml...
	I0610 11:56:36.256819   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Creating domain...
	I0610 11:56:37.521102   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting to get IP...
	I0610 11:56:37.522061   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:37.522494   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:37.522553   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:37.522473   61276 retry.go:31] will retry after 220.098219ms: waiting for machine to come up
	I0610 11:56:37.743932   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:37.744482   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:37.744513   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:37.744440   61276 retry.go:31] will retry after 292.471184ms: waiting for machine to come up
	I0610 11:56:38.038937   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:38.039497   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:38.039526   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:38.039454   61276 retry.go:31] will retry after 446.869846ms: waiting for machine to come up
	I0610 11:56:38.488091   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:38.488684   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:38.488708   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:38.488635   61276 retry.go:31] will retry after 607.787706ms: waiting for machine to come up
	I0610 11:56:39.098375   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:39.098845   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:39.098875   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:39.098795   61276 retry.go:31] will retry after 610.636143ms: waiting for machine to come up
	I0610 11:56:39.710692   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:39.711170   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:39.711198   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:39.711106   61276 retry.go:31] will retry after 598.132053ms: waiting for machine to come up
	I0610 11:56:40.310889   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:40.311397   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:40.311420   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:40.311328   61276 retry.go:31] will retry after 1.191704846s: waiting for machine to come up
	I0610 11:56:41.505131   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:41.505601   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:41.505631   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:41.505572   61276 retry.go:31] will retry after 937.081207ms: waiting for machine to come up
	I0610 11:56:42.444793   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:42.445368   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:42.445396   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:42.445338   61276 retry.go:31] will retry after 1.721662133s: waiting for machine to come up
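The "waiting for machine to come up" lines above show a retry loop with a growing, jittered delay while the restarted VM acquires a DHCP lease. The sketch below illustrates that retry-with-backoff pattern only; lookupIP is a hypothetical stand-in and not a libvirt or libmachine API.

// Sketch only: retry a lookup with growing, jittered backoff until a deadline,
// roughly like the retry.go intervals in the log above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP pretends to ask the hypervisor for the domain's current IP.
func lookupIP(domain string) (string, error) {
	return "", errNoLease // placeholder: always "not yet" in this sketch
}

func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		// Grow the delay and add jitter between attempts.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 2*time.Second {
			backoff *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
}

func main() {
	if _, err := waitForIP("default-k8s-diff-port-281114", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}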
	I0610 11:56:47.078993   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:56:47.079439   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:56:47.079463   57945 kubeadm.go:309] 
	I0610 11:56:47.079513   57945 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0610 11:56:47.079597   57945 kubeadm.go:309] 		timed out waiting for the condition
	I0610 11:56:47.079629   57945 kubeadm.go:309] 
	I0610 11:56:47.079678   57945 kubeadm.go:309] 	This error is likely caused by:
	I0610 11:56:47.079718   57945 kubeadm.go:309] 		- The kubelet is not running
	I0610 11:56:47.079865   57945 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0610 11:56:47.079876   57945 kubeadm.go:309] 
	I0610 11:56:47.080014   57945 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0610 11:56:47.080077   57945 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0610 11:56:47.080132   57945 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0610 11:56:47.080151   57945 kubeadm.go:309] 
	I0610 11:56:47.080280   57945 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0610 11:56:47.080377   57945 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0610 11:56:47.080389   57945 kubeadm.go:309] 
	I0610 11:56:47.080543   57945 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0610 11:56:47.080663   57945 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0610 11:56:47.080769   57945 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0610 11:56:47.080862   57945 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0610 11:56:47.080874   57945 kubeadm.go:309] 
	I0610 11:56:47.081877   57945 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 11:56:47.082023   57945 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0610 11:56:47.082137   57945 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0610 11:56:47.082233   57945 kubeadm.go:393] duration metric: took 8m2.423366884s to StartCluster
	I0610 11:56:47.082273   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:56:47.082325   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:56:47.130548   57945 cri.go:89] found id: ""
	I0610 11:56:47.130585   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.130596   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:56:47.130603   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:56:47.130673   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:56:47.170087   57945 cri.go:89] found id: ""
	I0610 11:56:47.170124   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.170136   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:56:47.170144   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:56:47.170219   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:56:47.210394   57945 cri.go:89] found id: ""
	I0610 11:56:47.210430   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.210442   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:56:47.210450   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:56:47.210532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:56:47.246002   57945 cri.go:89] found id: ""
	I0610 11:56:47.246032   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.246043   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:56:47.246051   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:56:47.246119   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:56:47.282333   57945 cri.go:89] found id: ""
	I0610 11:56:47.282361   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.282369   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:56:47.282375   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:56:47.282432   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:56:47.316205   57945 cri.go:89] found id: ""
	I0610 11:56:47.316241   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.316254   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:56:47.316262   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:56:47.316323   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:56:47.356012   57945 cri.go:89] found id: ""
	I0610 11:56:47.356047   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.356060   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:56:47.356069   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:56:47.356140   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:56:47.404624   57945 cri.go:89] found id: ""
	I0610 11:56:47.404655   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.404666   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
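The loop above asks the CRI runtime, component by component, whether any control-plane container exists; every query comes back empty because the kubelet never started the static pods. A small sketch of that diagnostic loop follows, shelling out to the same crictl command shown in the log; it assumes crictl is on PATH and uses the cri-o socket path from the log.

// Sketch only: check per component whether any container exists, mirroring the
// "crictl ps -a --quiet --name=..." queries above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	sock := "unix:///var/run/crio/crio.sock"
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "--runtime-endpoint", sock,
			"ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}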
	I0610 11:56:47.404678   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:56:47.404694   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:56:47.475236   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:56:47.475282   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:56:47.493382   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:56:47.493418   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:56:47.589894   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:56:47.589918   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:56:47.589934   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:56:47.726080   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:56:47.726123   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0610 11:56:47.770399   57945 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0610 11:56:47.770451   57945 out.go:239] * 
	W0610 11:56:47.770532   57945 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0610 11:56:47.770558   57945 out.go:239] * 
	W0610 11:56:47.771459   57945 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 11:56:47.775172   57945 out.go:177] 
	W0610 11:56:47.776444   57945 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0610 11:56:47.776509   57945 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0610 11:56:47.776545   57945 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0610 11:56:47.778306   57945 out.go:177] 
	
	
	==> CRI-O <==
	Jun 10 11:56:48 old-k8s-version-166693 crio[645]: time="2024-06-10 11:56:48.828204328Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718020608828182429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=caa88de5-cce6-47b7-93ab-b04ce0c8d057 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:56:48 old-k8s-version-166693 crio[645]: time="2024-06-10 11:56:48.828624785Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=291e0d01-9f21-46c7-ba7d-fea64014ec19 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:56:48 old-k8s-version-166693 crio[645]: time="2024-06-10 11:56:48.828688144Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=291e0d01-9f21-46c7-ba7d-fea64014ec19 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:56:48 old-k8s-version-166693 crio[645]: time="2024-06-10 11:56:48.828755862Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=291e0d01-9f21-46c7-ba7d-fea64014ec19 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:56:48 old-k8s-version-166693 crio[645]: time="2024-06-10 11:56:48.858876211Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cd4e6465-8eff-46dd-a037-ab46467fea30 name=/runtime.v1.RuntimeService/Version
	Jun 10 11:56:48 old-k8s-version-166693 crio[645]: time="2024-06-10 11:56:48.858964566Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd4e6465-8eff-46dd-a037-ab46467fea30 name=/runtime.v1.RuntimeService/Version
	Jun 10 11:56:48 old-k8s-version-166693 crio[645]: time="2024-06-10 11:56:48.860571566Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9f832a8e-49b5-4c59-8420-0a8de2b68014 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:56:48 old-k8s-version-166693 crio[645]: time="2024-06-10 11:56:48.861159297Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718020608861126173,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9f832a8e-49b5-4c59-8420-0a8de2b68014 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:56:48 old-k8s-version-166693 crio[645]: time="2024-06-10 11:56:48.861933147Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f841c27e-3c0e-4ac5-b5d0-f69f57d143fb name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:56:48 old-k8s-version-166693 crio[645]: time="2024-06-10 11:56:48.862062425Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f841c27e-3c0e-4ac5-b5d0-f69f57d143fb name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:56:48 old-k8s-version-166693 crio[645]: time="2024-06-10 11:56:48.862111502Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f841c27e-3c0e-4ac5-b5d0-f69f57d143fb name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:56:48 old-k8s-version-166693 crio[645]: time="2024-06-10 11:56:48.894868236Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4ae9a756-e61f-49f5-b999-7c1ca087dbb4 name=/runtime.v1.RuntimeService/Version
	Jun 10 11:56:48 old-k8s-version-166693 crio[645]: time="2024-06-10 11:56:48.894969579Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4ae9a756-e61f-49f5-b999-7c1ca087dbb4 name=/runtime.v1.RuntimeService/Version
	Jun 10 11:56:48 old-k8s-version-166693 crio[645]: time="2024-06-10 11:56:48.896230103Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b9b7aced-74de-4e17-8b07-e638608f080c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:56:48 old-k8s-version-166693 crio[645]: time="2024-06-10 11:56:48.896615206Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718020608896592177,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b9b7aced-74de-4e17-8b07-e638608f080c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:56:48 old-k8s-version-166693 crio[645]: time="2024-06-10 11:56:48.897158233Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1e4b8926-52dd-45df-9c63-ab56988dbd41 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:56:48 old-k8s-version-166693 crio[645]: time="2024-06-10 11:56:48.897221029Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1e4b8926-52dd-45df-9c63-ab56988dbd41 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:56:48 old-k8s-version-166693 crio[645]: time="2024-06-10 11:56:48.897264323Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1e4b8926-52dd-45df-9c63-ab56988dbd41 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:56:48 old-k8s-version-166693 crio[645]: time="2024-06-10 11:56:48.929369365Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf5b0064-7739-41b3-9b1a-df8355044b7a name=/runtime.v1.RuntimeService/Version
	Jun 10 11:56:48 old-k8s-version-166693 crio[645]: time="2024-06-10 11:56:48.929439138Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf5b0064-7739-41b3-9b1a-df8355044b7a name=/runtime.v1.RuntimeService/Version
	Jun 10 11:56:48 old-k8s-version-166693 crio[645]: time="2024-06-10 11:56:48.930570647Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eab8ae5a-8be0-4454-a580-340e9faab78e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:56:48 old-k8s-version-166693 crio[645]: time="2024-06-10 11:56:48.931019939Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718020608930991802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eab8ae5a-8be0-4454-a580-340e9faab78e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 11:56:48 old-k8s-version-166693 crio[645]: time="2024-06-10 11:56:48.931462540Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=addc1f14-a4a7-4df8-8599-757e8167f186 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:56:48 old-k8s-version-166693 crio[645]: time="2024-06-10 11:56:48.931517690Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=addc1f14-a4a7-4df8-8599-757e8167f186 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 11:56:48 old-k8s-version-166693 crio[645]: time="2024-06-10 11:56:48.931551302Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=addc1f14-a4a7-4df8-8599-757e8167f186 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jun10 11:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052778] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039241] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.662307] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.954746] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.609904] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.687001] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.069246] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073631] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.221904] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.142650] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.284629] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +6.510984] systemd-fstab-generator[829]: Ignoring "noauto" option for root device
	[  +0.065299] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.018208] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[ +11.261041] kauditd_printk_skb: 46 callbacks suppressed
	[Jun10 11:52] systemd-fstab-generator[5086]: Ignoring "noauto" option for root device
	[Jun10 11:54] systemd-fstab-generator[5370]: Ignoring "noauto" option for root device
	[  +0.069423] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 11:56:49 up 8 min,  0 users,  load average: 0.00, 0.07, 0.04
	Linux old-k8s-version-166693 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 10 11:56:46 old-k8s-version-166693 kubelet[5551]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001040c0, 0xc000a5fa70)
	Jun 10 11:56:46 old-k8s-version-166693 kubelet[5551]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Jun 10 11:56:46 old-k8s-version-166693 kubelet[5551]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Jun 10 11:56:46 old-k8s-version-166693 kubelet[5551]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Jun 10 11:56:46 old-k8s-version-166693 kubelet[5551]: goroutine 152 [select]:
	Jun 10 11:56:46 old-k8s-version-166693 kubelet[5551]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b03ef0, 0x4f0ac20, 0xc000103b30, 0x1, 0xc0001040c0)
	Jun 10 11:56:46 old-k8s-version-166693 kubelet[5551]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Jun 10 11:56:46 old-k8s-version-166693 kubelet[5551]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0009c22a0, 0xc0001040c0)
	Jun 10 11:56:46 old-k8s-version-166693 kubelet[5551]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jun 10 11:56:46 old-k8s-version-166693 kubelet[5551]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jun 10 11:56:46 old-k8s-version-166693 kubelet[5551]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jun 10 11:56:46 old-k8s-version-166693 kubelet[5551]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000a532c0, 0xc0009155e0)
	Jun 10 11:56:46 old-k8s-version-166693 kubelet[5551]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jun 10 11:56:46 old-k8s-version-166693 kubelet[5551]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jun 10 11:56:46 old-k8s-version-166693 kubelet[5551]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jun 10 11:56:46 old-k8s-version-166693 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 10 11:56:46 old-k8s-version-166693 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 10 11:56:47 old-k8s-version-166693 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Jun 10 11:56:47 old-k8s-version-166693 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 10 11:56:47 old-k8s-version-166693 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 10 11:56:47 old-k8s-version-166693 kubelet[5607]: I0610 11:56:47.707364    5607 server.go:416] Version: v1.20.0
	Jun 10 11:56:47 old-k8s-version-166693 kubelet[5607]: I0610 11:56:47.707858    5607 server.go:837] Client rotation is on, will bootstrap in background
	Jun 10 11:56:47 old-k8s-version-166693 kubelet[5607]: I0610 11:56:47.710704    5607 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 10 11:56:47 old-k8s-version-166693 kubelet[5607]: W0610 11:56:47.714359    5607 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jun 10 11:56:47 old-k8s-version-166693 kubelet[5607]: I0610 11:56:47.716833    5607 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-166693 -n old-k8s-version-166693
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-166693 -n old-k8s-version-166693: exit status 2 (238.97138ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-166693" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (697.30s)
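The kubeadm output above never gets a healthy kubelet answering on :10248, systemd restarts kubelet.service twenty times, and minikube's own hint points at a possible cgroup-driver mismatch (issue #4172). A minimal manual triage along those lines might look like the sketch below; the profile name is taken from the log, while the exact config file paths are assumptions and may differ on this image.

	# hypothetical triage session on the failing node (names/paths assumed from the log above)
	minikube ssh -p old-k8s-version-166693

	# inside the VM: confirm the kubelet is crash-looping and read its last error
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet -n 100 --no-pager

	# compare the kubelet's cgroup driver with CRI-O's cgroup manager
	sudo grep -i cgroup /var/lib/kubelet/config.yaml /var/lib/kubelet/kubeadm-flags.env
	sudo grep -i cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/*.conf 2>/dev/null

	# if the two disagree, retry the start with the driver minikube suggests
	minikube start -p old-k8s-version-166693 --extra-config=kubelet.cgroup-driver=systemd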

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-281114 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-281114 --alsologtostderr -v=3: exit status 82 (2m0.509002625s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-281114"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 11:49:22.279564   59496 out.go:291] Setting OutFile to fd 1 ...
	I0610 11:49:22.279817   59496 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:49:22.279827   59496 out.go:304] Setting ErrFile to fd 2...
	I0610 11:49:22.279834   59496 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:49:22.280040   59496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 11:49:22.280284   59496 out.go:298] Setting JSON to false
	I0610 11:49:22.280372   59496 mustload.go:65] Loading cluster: default-k8s-diff-port-281114
	I0610 11:49:22.280678   59496 config.go:182] Loaded profile config "default-k8s-diff-port-281114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:49:22.280758   59496 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/config.json ...
	I0610 11:49:22.280935   59496 mustload.go:65] Loading cluster: default-k8s-diff-port-281114
	I0610 11:49:22.281086   59496 config.go:182] Loaded profile config "default-k8s-diff-port-281114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:49:22.281121   59496 stop.go:39] StopHost: default-k8s-diff-port-281114
	I0610 11:49:22.281527   59496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:49:22.281586   59496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:49:22.297470   59496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43683
	I0610 11:49:22.298074   59496 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:49:22.298592   59496 main.go:141] libmachine: Using API Version  1
	I0610 11:49:22.298620   59496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:49:22.299087   59496 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:49:22.301585   59496 out.go:177] * Stopping node "default-k8s-diff-port-281114"  ...
	I0610 11:49:22.302788   59496 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0610 11:49:22.302823   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:49:22.303090   59496 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0610 11:49:22.303120   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:49:22.306073   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:49:22.306480   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:47:50 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:49:22.306506   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:49:22.306712   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:49:22.306858   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:49:22.306999   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:49:22.307168   59496 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 11:49:22.401117   59496 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0610 11:49:22.466668   59496 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0610 11:49:22.533485   59496 main.go:141] libmachine: Stopping "default-k8s-diff-port-281114"...
	I0610 11:49:22.533535   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 11:49:22.535085   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Stop
	I0610 11:49:22.538757   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 0/120
	I0610 11:49:23.540233   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 1/120
	I0610 11:49:24.541675   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 2/120
	I0610 11:49:25.543147   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 3/120
	I0610 11:49:26.544323   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 4/120
	I0610 11:49:27.546148   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 5/120
	I0610 11:49:28.547457   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 6/120
	I0610 11:49:29.549691   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 7/120
	I0610 11:49:30.551135   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 8/120
	I0610 11:49:31.552699   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 9/120
	I0610 11:49:32.554866   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 10/120
	I0610 11:49:33.556309   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 11/120
	I0610 11:49:34.557716   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 12/120
	I0610 11:49:35.559016   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 13/120
	I0610 11:49:36.560544   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 14/120
	I0610 11:49:37.561810   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 15/120
	I0610 11:49:38.563495   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 16/120
	I0610 11:49:39.565169   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 17/120
	I0610 11:49:40.566402   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 18/120
	I0610 11:49:41.567832   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 19/120
	I0610 11:49:42.570323   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 20/120
	I0610 11:49:43.571949   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 21/120
	I0610 11:49:44.573674   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 22/120
	I0610 11:49:45.575762   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 23/120
	I0610 11:49:46.577485   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 24/120
	I0610 11:49:47.579162   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 25/120
	I0610 11:49:48.581030   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 26/120
	I0610 11:49:49.582610   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 27/120
	I0610 11:49:50.584509   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 28/120
	I0610 11:49:51.586011   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 29/120
	I0610 11:49:52.587749   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 30/120
	I0610 11:49:53.589198   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 31/120
	I0610 11:49:54.590555   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 32/120
	I0610 11:49:55.592085   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 33/120
	I0610 11:49:56.593676   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 34/120
	I0610 11:49:57.595479   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 35/120
	I0610 11:49:58.596856   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 36/120
	I0610 11:49:59.598369   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 37/120
	I0610 11:50:00.599770   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 38/120
	I0610 11:50:01.601597   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 39/120
	I0610 11:50:02.603313   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 40/120
	I0610 11:50:03.604581   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 41/120
	I0610 11:50:04.605981   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 42/120
	I0610 11:50:05.607372   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 43/120
	I0610 11:50:06.608768   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 44/120
	I0610 11:50:07.611035   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 45/120
	I0610 11:50:08.612403   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 46/120
	I0610 11:50:09.614239   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 47/120
	I0610 11:50:10.615685   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 48/120
	I0610 11:50:11.617335   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 49/120
	I0610 11:50:12.619418   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 50/120
	I0610 11:50:13.621083   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 51/120
	I0610 11:50:14.622567   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 52/120
	I0610 11:50:15.624043   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 53/120
	I0610 11:50:16.625567   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 54/120
	I0610 11:50:17.627494   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 55/120
	I0610 11:50:18.629821   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 56/120
	I0610 11:50:19.631244   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 57/120
	I0610 11:50:20.632711   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 58/120
	I0610 11:50:21.634056   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 59/120
	I0610 11:50:22.636355   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 60/120
	I0610 11:50:23.637801   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 61/120
	I0610 11:50:24.639258   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 62/120
	I0610 11:50:25.640973   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 63/120
	I0610 11:50:26.642441   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 64/120
	I0610 11:50:27.643819   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 65/120
	I0610 11:50:28.645273   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 66/120
	I0610 11:50:29.647506   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 67/120
	I0610 11:50:30.649057   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 68/120
	I0610 11:50:31.650407   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 69/120
	I0610 11:50:32.652647   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 70/120
	I0610 11:50:33.654885   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 71/120
	I0610 11:50:34.656163   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 72/120
	I0610 11:50:35.657707   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 73/120
	I0610 11:50:36.659093   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 74/120
	I0610 11:50:37.660653   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 75/120
	I0610 11:50:38.662062   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 76/120
	I0610 11:50:39.664149   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 77/120
	I0610 11:50:40.665774   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 78/120
	I0610 11:50:41.667769   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 79/120
	I0610 11:50:42.669888   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 80/120
	I0610 11:50:43.671454   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 81/120
	I0610 11:50:44.672939   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 82/120
	I0610 11:50:45.674404   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 83/120
	I0610 11:50:46.675766   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 84/120
	I0610 11:50:47.678024   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 85/120
	I0610 11:50:48.680048   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 86/120
	I0610 11:50:49.682175   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 87/120
	I0610 11:50:50.683558   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 88/120
	I0610 11:50:51.685124   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 89/120
	I0610 11:50:52.687192   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 90/120
	I0610 11:50:53.690211   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 91/120
	I0610 11:50:54.692615   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 92/120
	I0610 11:50:55.694003   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 93/120
	I0610 11:50:56.695536   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 94/120
	I0610 11:50:57.697451   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 95/120
	I0610 11:50:58.699554   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 96/120
	I0610 11:50:59.702041   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 97/120
	I0610 11:51:00.703450   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 98/120
	I0610 11:51:01.704544   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 99/120
	I0610 11:51:02.706481   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 100/120
	I0610 11:51:03.707757   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 101/120
	I0610 11:51:04.709420   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 102/120
	I0610 11:51:05.711102   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 103/120
	I0610 11:51:06.712512   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 104/120
	I0610 11:51:07.714104   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 105/120
	I0610 11:51:08.715381   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 106/120
	I0610 11:51:09.716795   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 107/120
	I0610 11:51:10.718506   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 108/120
	I0610 11:51:11.720846   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 109/120
	I0610 11:51:12.722956   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 110/120
	I0610 11:51:13.724430   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 111/120
	I0610 11:51:14.726535   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 112/120
	I0610 11:51:15.727793   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 113/120
	I0610 11:51:16.729150   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 114/120
	I0610 11:51:17.730907   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 115/120
	I0610 11:51:18.732381   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 116/120
	I0610 11:51:19.734296   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 117/120
	I0610 11:51:20.735624   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 118/120
	I0610 11:51:21.737075   59496 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for machine to stop 119/120
	I0610 11:51:22.737493   59496 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0610 11:51:22.737537   59496 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0610 11:51:22.739641   59496 out.go:177] 
	W0610 11:51:22.741132   59496 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0610 11:51:22.741162   59496 out.go:239] * 
	* 
	W0610 11:51:22.743832   59496 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 11:51:22.745272   59496 out.go:177] 

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-281114 --alsologtostderr -v=3" : exit status 82
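The 120 "Waiting for machine to stop" iterations above are libmachine polling the libvirt domain roughly once per second; when the guest is still "Running" after the last iteration, minikube gives up with GUEST_STOP_TIMEOUT and exit status 82. For local triage, a minimal check of the underlying domain, assuming the qemu:///system URI from this profile's config and that the kvm2 driver names the domain after the profile, might look like:

    # Show the libvirt state of the VM that refused to stop (domain name assumed = profile name)
    virsh -c qemu:///system domstate default-k8s-diff-port-281114
    # Blunt fallback if the guest keeps ignoring the ACPI shutdown request: hard power-off
    virsh -c qemu:///system destroy default-k8s-diff-port-281114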
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-281114 -n default-k8s-diff-port-281114
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-281114 -n default-k8s-diff-port-281114: exit status 3 (18.502796328s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0610 11:51:41.249242   59940 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.222:22: connect: no route to host
	E0610 11:51:41.249264   59940 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.222:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-281114" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.01s)
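The exit status 3 from the post-mortem status check is a transport failure, not an API-server verdict: every probe dies on "dial tcp 192.168.50.222:22: connect: no route to host" before minikube can query the node at all. A quick reachability probe against the address and port taken from the errors above, before retrying anything else, could be:

    # Is the node's SSH port reachable at all? (IP taken from the status errors above)
    nc -z -v -w 5 192.168.50.222 22
    # Once that succeeds, re-run the same host-state check the test used
    out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-281114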

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-281114 -n default-k8s-diff-port-281114
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-281114 -n default-k8s-diff-port-281114: exit status 3 (3.167753593s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0610 11:51:44.417312   60035 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.222:22: connect: no route to host
	E0610 11:51:44.417338   60035 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.222:22: connect: no route to host

** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-281114 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-281114 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.151874857s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.222:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-281114 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
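The MK_ADDON_ENABLE_PAUSED failure is the same connectivity problem wearing a different label: per the error text above, minikube first checks for paused containers by listing them with crictl over SSH, and that session cannot be opened while the node is unreachable. Once port 22 answers again, a sanity check through minikube's own SSH wrapper followed by retrying the exact enable command from the log would be a reasonable next step (a sketch, assuming the profile comes back up):

    # Confirm the CRI endpoint responds inside the VM (mirrors the paused-container check)
    out/minikube-linux-amd64 -p default-k8s-diff-port-281114 ssh -- sudo crictl ps -a
    # Retry the addon enable exactly as the test invoked it
    out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-281114 --images=MetricsScraper=registry.k8s.io/echoserver:1.4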
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-281114 -n default-k8s-diff-port-281114
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-281114 -n default-k8s-diff-port-281114: exit status 3 (3.063982655s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0610 11:51:53.633357   60100 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.222:22: connect: no route to host
	E0610 11:51:53.633379   60100 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.222:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-281114" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.55s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-832735 -n embed-certs-832735
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-06-10 12:02:36.041167118 +0000 UTC m=+6109.127198612
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
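The wait loop polled the kubernetes-dashboard namespace for pods matching k8s-app=kubernetes-dashboard for the full 9m0s without the selector ever becoming healthy. If the cluster is still reachable, the same selector can be inspected directly to tell "never created" apart from "Pending" or "crash-looping" (assuming the kubeconfig context carries the profile name, as elsewhere in this report):

    # List and describe whatever matches the selector the test was waiting on
    kubectl --context embed-certs-832735 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
    kubectl --context embed-certs-832735 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard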
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-832735 -n embed-certs-832735
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-832735 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-832735 logs -n 25: (1.298908444s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-324836                              | cert-expiration-324836       | jenkins | v1.33.1 | 10 Jun 24 11:39 UTC | 10 Jun 24 11:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-036579 | jenkins | v1.33.1 | 10 Jun 24 11:39 UTC | 10 Jun 24 11:39 UTC |
	|         | disable-driver-mounts-036579                           |                              |         |         |                     |                     |
	| start   | -p no-preload-298179                                   | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:39 UTC | 10 Jun 24 11:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-832735            | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:40 UTC | 10 Jun 24 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-832735                                  | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC | 10 Jun 24 11:41 UTC |
	| addons  | enable metrics-server -p no-preload-298179             | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC | 10 Jun 24 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-298179                                   | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC | 10 Jun 24 11:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC | 10 Jun 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-832735                 | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-832735                                  | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC | 10 Jun 24 11:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-166693        | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-298179                  | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC | 10 Jun 24 11:44 UTC |
	| start   | -p                                                     | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC | 10 Jun 24 11:49 UTC |
	|         | default-k8s-diff-port-281114                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p no-preload-298179                                   | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC | 10 Jun 24 11:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-166693                              | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:45 UTC | 10 Jun 24 11:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-166693             | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:45 UTC | 10 Jun 24 11:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-166693                              | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-281114  | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:49 UTC | 10 Jun 24 11:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:49 UTC |                     |
	|         | default-k8s-diff-port-281114                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-281114       | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:51 UTC | 10 Jun 24 12:02 UTC |
	|         | default-k8s-diff-port-281114                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 11:51:53
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 11:51:53.675460   60146 out.go:291] Setting OutFile to fd 1 ...
	I0610 11:51:53.675676   60146 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:51:53.675684   60146 out.go:304] Setting ErrFile to fd 2...
	I0610 11:51:53.675688   60146 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:51:53.675848   60146 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 11:51:53.676386   60146 out.go:298] Setting JSON to false
	I0610 11:51:53.677403   60146 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5655,"bootTime":1718014659,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 11:51:53.677465   60146 start.go:139] virtualization: kvm guest
	I0610 11:51:53.679851   60146 out.go:177] * [default-k8s-diff-port-281114] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 11:51:53.681209   60146 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 11:51:53.682492   60146 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 11:51:53.681162   60146 notify.go:220] Checking for updates...
	I0610 11:51:53.683939   60146 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 11:51:53.685202   60146 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 11:51:53.686363   60146 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 11:51:53.687770   60146 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 11:51:53.689668   60146 config.go:182] Loaded profile config "default-k8s-diff-port-281114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:51:53.690093   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:51:53.690167   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:51:53.705134   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35827
	I0610 11:51:53.705647   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:51:53.706289   60146 main.go:141] libmachine: Using API Version  1
	I0610 11:51:53.706314   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:51:53.706603   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:51:53.706788   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:51:53.707058   60146 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 11:51:53.707411   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:51:53.707451   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:51:53.722927   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45441
	I0610 11:51:53.723433   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:51:53.723927   60146 main.go:141] libmachine: Using API Version  1
	I0610 11:51:53.723953   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:51:53.724482   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:51:53.724651   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:51:53.763209   60146 out.go:177] * Using the kvm2 driver based on existing profile
	I0610 11:51:53.764436   60146 start.go:297] selected driver: kvm2
	I0610 11:51:53.764446   60146 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-281114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-281114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.222 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:51:53.764537   60146 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 11:51:53.765172   60146 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 11:51:53.765257   60146 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19046-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0610 11:51:53.782641   60146 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0610 11:51:53.783044   60146 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 11:51:53.783099   60146 cni.go:84] Creating CNI manager for ""
	I0610 11:51:53.783109   60146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:51:53.783152   60146 start.go:340] cluster config:
	{Name:default-k8s-diff-port-281114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-281114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.222 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:51:53.783254   60146 iso.go:125] acquiring lock: {Name:mk85871dbcb370cef71a4aa2ff7ef43503908c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 11:51:53.786018   60146 out.go:177] * Starting "default-k8s-diff-port-281114" primary control-plane node in "default-k8s-diff-port-281114" cluster
	I0610 11:51:53.787303   60146 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 11:51:53.787344   60146 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0610 11:51:53.787357   60146 cache.go:56] Caching tarball of preloaded images
	I0610 11:51:53.787439   60146 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 11:51:53.787455   60146 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0610 11:51:53.787569   60146 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/config.json ...
	I0610 11:51:53.787799   60146 start.go:360] acquireMachinesLock for default-k8s-diff-port-281114: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 11:51:53.787855   60146 start.go:364] duration metric: took 30.27µs to acquireMachinesLock for "default-k8s-diff-port-281114"
	I0610 11:51:53.787875   60146 start.go:96] Skipping create...Using existing machine configuration
	I0610 11:51:53.787881   60146 fix.go:54] fixHost starting: 
	I0610 11:51:53.788131   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:51:53.788165   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:51:53.805744   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35417
	I0610 11:51:53.806279   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:51:53.806909   60146 main.go:141] libmachine: Using API Version  1
	I0610 11:51:53.806936   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:51:53.807346   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:51:53.807532   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:51:53.807718   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 11:51:53.809469   60146 fix.go:112] recreateIfNeeded on default-k8s-diff-port-281114: state=Running err=<nil>
	W0610 11:51:53.809507   60146 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 11:51:53.811518   60146 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-281114" VM ...
	I0610 11:51:50.691535   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:52.691588   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:54.692007   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:54.248038   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:54.261302   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:54.261375   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:54.293194   57945 cri.go:89] found id: ""
	I0610 11:51:54.293228   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.293240   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:54.293247   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:54.293307   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:54.326656   57945 cri.go:89] found id: ""
	I0610 11:51:54.326687   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.326699   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:54.326707   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:54.326764   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:54.359330   57945 cri.go:89] found id: ""
	I0610 11:51:54.359365   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.359378   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:54.359386   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:54.359450   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:54.391520   57945 cri.go:89] found id: ""
	I0610 11:51:54.391549   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.391558   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:54.391565   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:54.391642   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:54.426803   57945 cri.go:89] found id: ""
	I0610 11:51:54.426840   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.426850   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:54.426860   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:54.426936   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:54.462618   57945 cri.go:89] found id: ""
	I0610 11:51:54.462645   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.462653   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:54.462659   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:54.462728   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:54.494599   57945 cri.go:89] found id: ""
	I0610 11:51:54.494631   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.494642   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:54.494650   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:54.494701   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:54.528236   57945 cri.go:89] found id: ""
	I0610 11:51:54.528265   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.528280   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:54.528290   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:54.528305   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:54.579562   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:54.579604   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:54.592871   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:54.592899   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:54.661928   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:54.661950   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:54.661984   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:54.741578   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:54.741611   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:53.939312   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:55.940181   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:53.812752   60146 machine.go:94] provisionDockerMachine start ...
	I0610 11:51:53.812779   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:51:53.813001   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:51:53.815580   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:51:53.815981   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:47:50 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:51:53.816013   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:51:53.816111   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:51:53.816288   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:51:53.816435   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:51:53.816577   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:51:53.816743   60146 main.go:141] libmachine: Using SSH client type: native
	I0610 11:51:53.817141   60146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.222 22 <nil> <nil>}
	I0610 11:51:53.817157   60146 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 11:51:56.705435   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:51:56.692515   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:59.192511   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:57.283397   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:57.296631   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:57.296704   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:57.328185   57945 cri.go:89] found id: ""
	I0610 11:51:57.328217   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.328228   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:57.328237   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:57.328302   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:57.360137   57945 cri.go:89] found id: ""
	I0610 11:51:57.360163   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.360173   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:57.360188   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:57.360244   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:57.395638   57945 cri.go:89] found id: ""
	I0610 11:51:57.395680   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.395691   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:57.395700   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:57.395765   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:57.429024   57945 cri.go:89] found id: ""
	I0610 11:51:57.429051   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.429062   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:57.429070   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:57.429132   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:57.461726   57945 cri.go:89] found id: ""
	I0610 11:51:57.461757   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.461767   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:57.461773   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:57.461838   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:57.495055   57945 cri.go:89] found id: ""
	I0610 11:51:57.495078   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.495086   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:57.495092   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:57.495138   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:57.526495   57945 cri.go:89] found id: ""
	I0610 11:51:57.526521   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.526530   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:57.526536   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:57.526598   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:57.559160   57945 cri.go:89] found id: ""
	I0610 11:51:57.559181   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.559189   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:57.559197   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:57.559212   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:57.593801   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:57.593827   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:57.641074   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:57.641106   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:57.654097   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:57.654124   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:57.726137   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:57.726160   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:57.726176   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:00.302303   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:00.314500   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:00.314560   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:00.345865   57945 cri.go:89] found id: ""
	I0610 11:52:00.345889   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.345897   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:00.345902   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:00.345946   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:00.377383   57945 cri.go:89] found id: ""
	I0610 11:52:00.377405   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.377412   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:00.377417   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:00.377482   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:00.408667   57945 cri.go:89] found id: ""
	I0610 11:52:00.408694   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.408701   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:00.408706   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:00.408755   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:00.444349   57945 cri.go:89] found id: ""
	I0610 11:52:00.444379   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.444390   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:00.444397   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:00.444455   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:00.477886   57945 cri.go:89] found id: ""
	I0610 11:52:00.477910   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.477918   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:00.477924   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:00.477982   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:00.508996   57945 cri.go:89] found id: ""
	I0610 11:52:00.509023   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.509030   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:00.509036   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:00.509097   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:00.541548   57945 cri.go:89] found id: ""
	I0610 11:52:00.541572   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.541580   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:00.541585   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:00.541642   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:00.574507   57945 cri.go:89] found id: ""
	I0610 11:52:00.574534   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.574541   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:00.574550   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:00.574565   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:00.610838   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:00.610862   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:00.661155   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:00.661197   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:00.674122   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:00.674154   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:00.745943   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:00.745976   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:00.745993   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:58.439245   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:00.441145   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:59.777253   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:01.691833   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:04.193279   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:03.325365   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:03.337955   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:03.338042   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:03.370767   57945 cri.go:89] found id: ""
	I0610 11:52:03.370798   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.370810   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:03.370818   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:03.370903   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:03.402587   57945 cri.go:89] found id: ""
	I0610 11:52:03.402616   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.402623   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:03.402628   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:03.402684   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:03.436751   57945 cri.go:89] found id: ""
	I0610 11:52:03.436778   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.436788   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:03.436795   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:03.436854   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:03.467745   57945 cri.go:89] found id: ""
	I0610 11:52:03.467778   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.467788   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:03.467798   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:03.467865   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:03.499321   57945 cri.go:89] found id: ""
	I0610 11:52:03.499347   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.499355   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:03.499361   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:03.499419   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:03.534209   57945 cri.go:89] found id: ""
	I0610 11:52:03.534242   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.534253   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:03.534261   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:03.534318   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:03.567837   57945 cri.go:89] found id: ""
	I0610 11:52:03.567871   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.567882   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:03.567889   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:03.567954   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:03.604223   57945 cri.go:89] found id: ""
	I0610 11:52:03.604249   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.604258   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:03.604266   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:03.604280   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:03.659716   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:03.659751   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:03.673389   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:03.673425   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:03.746076   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:03.746104   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:03.746118   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:03.825803   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:03.825837   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:06.362151   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:06.375320   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:06.375394   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:06.409805   57945 cri.go:89] found id: ""
	I0610 11:52:06.409840   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.409851   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:06.409859   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:06.409914   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:06.447126   57945 cri.go:89] found id: ""
	I0610 11:52:06.447157   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.447167   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:06.447174   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:06.447237   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:06.479443   57945 cri.go:89] found id: ""
	I0610 11:52:06.479472   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.479483   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:06.479489   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:06.479546   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:06.511107   57945 cri.go:89] found id: ""
	I0610 11:52:06.511137   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.511148   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:06.511163   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:06.511223   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:06.542727   57945 cri.go:89] found id: ""
	I0610 11:52:06.542753   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.542761   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:06.542767   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:06.542812   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:06.582141   57945 cri.go:89] found id: ""
	I0610 11:52:06.582166   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.582174   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:06.582180   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:06.582239   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:06.615203   57945 cri.go:89] found id: ""
	I0610 11:52:06.615230   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.615240   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:06.615248   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:06.615314   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:06.650286   57945 cri.go:89] found id: ""
	I0610 11:52:06.650310   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.650317   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:06.650326   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:06.650338   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:06.721601   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:06.721631   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:06.721646   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:06.794645   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:06.794679   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:06.830598   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:06.830628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:06.880740   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:06.880786   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:02.939105   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:04.939366   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:07.439715   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:05.861224   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:06.691130   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:09.191608   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:09.394202   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:09.409822   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:09.409898   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:09.451573   57945 cri.go:89] found id: ""
	I0610 11:52:09.451597   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.451605   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:09.451611   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:09.451663   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:09.491039   57945 cri.go:89] found id: ""
	I0610 11:52:09.491069   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.491080   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:09.491087   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:09.491147   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:09.522023   57945 cri.go:89] found id: ""
	I0610 11:52:09.522050   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.522058   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:09.522063   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:09.522108   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:09.554014   57945 cri.go:89] found id: ""
	I0610 11:52:09.554040   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.554048   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:09.554057   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:09.554127   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:09.586285   57945 cri.go:89] found id: ""
	I0610 11:52:09.586318   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.586328   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:09.586336   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:09.586396   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:09.618362   57945 cri.go:89] found id: ""
	I0610 11:52:09.618391   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.618401   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:09.618408   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:09.618465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:09.651067   57945 cri.go:89] found id: ""
	I0610 11:52:09.651097   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.651108   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:09.651116   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:09.651174   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:09.682764   57945 cri.go:89] found id: ""
	I0610 11:52:09.682792   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.682799   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:09.682807   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:09.682819   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:09.755071   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:09.755096   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:09.755109   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:09.833635   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:09.833672   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:09.869744   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:09.869777   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:09.924045   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:09.924079   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:09.440296   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:11.939025   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:08.929184   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:11.691213   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:13.693439   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:12.438029   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:12.452003   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:12.452070   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:12.485680   57945 cri.go:89] found id: ""
	I0610 11:52:12.485711   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.485719   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:12.485725   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:12.485773   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:12.519200   57945 cri.go:89] found id: ""
	I0610 11:52:12.519227   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.519238   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:12.519245   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:12.519317   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:12.553154   57945 cri.go:89] found id: ""
	I0610 11:52:12.553179   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.553185   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:12.553191   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:12.553237   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:12.584499   57945 cri.go:89] found id: ""
	I0610 11:52:12.584543   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.584555   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:12.584564   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:12.584619   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:12.619051   57945 cri.go:89] found id: ""
	I0610 11:52:12.619079   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.619094   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:12.619102   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:12.619165   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:12.653652   57945 cri.go:89] found id: ""
	I0610 11:52:12.653690   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.653702   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:12.653710   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:12.653773   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:12.685887   57945 cri.go:89] found id: ""
	I0610 11:52:12.685919   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.685930   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:12.685938   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:12.685997   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:12.719534   57945 cri.go:89] found id: ""
	I0610 11:52:12.719567   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.719578   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:12.719591   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:12.719603   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:12.770689   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:12.770725   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:12.783574   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:12.783604   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:12.855492   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:12.855518   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:12.855529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:12.928993   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:12.929037   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:15.487670   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:15.501367   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:15.501437   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:15.534205   57945 cri.go:89] found id: ""
	I0610 11:52:15.534248   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.534256   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:15.534262   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:15.534315   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:15.570972   57945 cri.go:89] found id: ""
	I0610 11:52:15.571001   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.571008   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:15.571013   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:15.571073   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:15.604233   57945 cri.go:89] found id: ""
	I0610 11:52:15.604258   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.604267   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:15.604273   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:15.604328   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:15.637119   57945 cri.go:89] found id: ""
	I0610 11:52:15.637150   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.637159   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:15.637167   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:15.637226   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:15.670548   57945 cri.go:89] found id: ""
	I0610 11:52:15.670572   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.670580   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:15.670586   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:15.670644   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:15.706374   57945 cri.go:89] found id: ""
	I0610 11:52:15.706398   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.706406   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:15.706412   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:15.706457   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:15.742828   57945 cri.go:89] found id: ""
	I0610 11:52:15.742852   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.742859   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:15.742865   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:15.742910   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:15.773783   57945 cri.go:89] found id: ""
	I0610 11:52:15.773811   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.773818   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:15.773825   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:15.773835   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:15.828725   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:15.828764   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:15.842653   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:15.842682   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:15.919771   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:15.919794   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:15.919809   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:15.994439   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:15.994478   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:13.943213   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:16.439647   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:15.009211   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:18.081244   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:16.191615   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:18.191760   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:18.532040   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:18.544800   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:18.544893   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:18.579148   57945 cri.go:89] found id: ""
	I0610 11:52:18.579172   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.579180   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:18.579186   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:18.579236   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:18.613005   57945 cri.go:89] found id: ""
	I0610 11:52:18.613028   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.613035   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:18.613042   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:18.613094   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:18.648843   57945 cri.go:89] found id: ""
	I0610 11:52:18.648870   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.648878   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:18.648883   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:18.648939   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:18.678943   57945 cri.go:89] found id: ""
	I0610 11:52:18.678974   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.679014   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:18.679022   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:18.679082   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:18.728485   57945 cri.go:89] found id: ""
	I0610 11:52:18.728516   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.728527   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:18.728535   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:18.728605   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:18.764320   57945 cri.go:89] found id: ""
	I0610 11:52:18.764352   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.764363   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:18.764370   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:18.764431   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:18.797326   57945 cri.go:89] found id: ""
	I0610 11:52:18.797358   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.797369   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:18.797377   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:18.797440   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:18.832517   57945 cri.go:89] found id: ""
	I0610 11:52:18.832552   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.832563   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:18.832574   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:18.832588   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:18.845158   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:18.845192   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:18.915928   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:18.915959   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:18.915974   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:18.990583   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:18.990625   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:19.029044   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:19.029069   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:21.582973   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:21.596373   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:21.596453   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:21.633497   57945 cri.go:89] found id: ""
	I0610 11:52:21.633528   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.633538   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:21.633546   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:21.633631   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:21.663999   57945 cri.go:89] found id: ""
	I0610 11:52:21.664055   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.664069   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:21.664078   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:21.664138   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:21.698105   57945 cri.go:89] found id: ""
	I0610 11:52:21.698136   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.698147   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:21.698155   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:21.698213   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:21.730036   57945 cri.go:89] found id: ""
	I0610 11:52:21.730061   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.730068   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:21.730074   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:21.730119   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:21.764484   57945 cri.go:89] found id: ""
	I0610 11:52:21.764507   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.764515   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:21.764520   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:21.764575   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:21.797366   57945 cri.go:89] found id: ""
	I0610 11:52:21.797397   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.797408   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:21.797415   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:21.797478   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:21.832991   57945 cri.go:89] found id: ""
	I0610 11:52:21.833023   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.833030   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:21.833035   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:21.833081   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:21.868859   57945 cri.go:89] found id: ""
	I0610 11:52:21.868890   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.868899   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:21.868924   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:21.868937   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:21.918976   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:21.919013   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:21.934602   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:21.934629   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:22.002888   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:22.002909   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:22.002920   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:22.082894   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:22.082941   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:18.439853   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:20.942040   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:20.692398   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:23.191532   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:24.620683   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:24.634200   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:24.634280   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:24.667181   57945 cri.go:89] found id: ""
	I0610 11:52:24.667209   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.667217   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:24.667222   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:24.667277   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:24.702114   57945 cri.go:89] found id: ""
	I0610 11:52:24.702142   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.702151   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:24.702158   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:24.702220   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:24.734464   57945 cri.go:89] found id: ""
	I0610 11:52:24.734488   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.734497   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:24.734502   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:24.734565   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:24.767074   57945 cri.go:89] found id: ""
	I0610 11:52:24.767124   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.767132   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:24.767138   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:24.767210   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:24.800328   57945 cri.go:89] found id: ""
	I0610 11:52:24.800358   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.800369   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:24.800376   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:24.800442   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:24.837785   57945 cri.go:89] found id: ""
	I0610 11:52:24.837814   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.837822   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:24.837828   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:24.837878   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:24.874886   57945 cri.go:89] found id: ""
	I0610 11:52:24.874910   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.874917   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:24.874923   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:24.874968   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:24.912191   57945 cri.go:89] found id: ""
	I0610 11:52:24.912217   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.912235   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:24.912247   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:24.912265   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:24.968229   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:24.968262   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:24.981018   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:24.981048   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:25.049879   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:25.049907   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:25.049922   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:25.135103   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:25.135156   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:23.440293   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:25.939540   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:27.201186   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:25.691136   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:27.691669   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:27.687667   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:27.700418   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:27.700486   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:27.733712   57945 cri.go:89] found id: ""
	I0610 11:52:27.733740   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.733749   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:27.733754   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:27.733839   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:27.774063   57945 cri.go:89] found id: ""
	I0610 11:52:27.774089   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.774100   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:27.774108   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:27.774169   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:27.813906   57945 cri.go:89] found id: ""
	I0610 11:52:27.813945   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.813956   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:27.813963   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:27.814031   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:27.845877   57945 cri.go:89] found id: ""
	I0610 11:52:27.845901   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.845909   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:27.845915   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:27.845961   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:27.880094   57945 cri.go:89] found id: ""
	I0610 11:52:27.880139   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.880148   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:27.880153   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:27.880206   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:27.914308   57945 cri.go:89] found id: ""
	I0610 11:52:27.914332   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.914342   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:27.914355   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:27.914420   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:27.949386   57945 cri.go:89] found id: ""
	I0610 11:52:27.949412   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.949423   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:27.949430   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:27.949490   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:27.983901   57945 cri.go:89] found id: ""
	I0610 11:52:27.983927   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.983938   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:27.983948   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:27.983963   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:28.032820   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:28.032853   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:28.046306   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:28.046332   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:28.120614   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:28.120642   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:28.120657   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:28.202182   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:28.202217   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:30.741274   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:30.754276   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:30.754358   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:30.789142   57945 cri.go:89] found id: ""
	I0610 11:52:30.789174   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.789185   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:30.789193   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:30.789255   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:30.822319   57945 cri.go:89] found id: ""
	I0610 11:52:30.822350   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.822362   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:30.822369   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:30.822428   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:30.853166   57945 cri.go:89] found id: ""
	I0610 11:52:30.853192   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.853199   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:30.853204   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:30.853271   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:30.892290   57945 cri.go:89] found id: ""
	I0610 11:52:30.892320   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.892331   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:30.892339   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:30.892401   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:30.938603   57945 cri.go:89] found id: ""
	I0610 11:52:30.938629   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.938639   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:30.938646   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:30.938703   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:30.994532   57945 cri.go:89] found id: ""
	I0610 11:52:30.994567   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.994583   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:30.994589   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:30.994649   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:31.041818   57945 cri.go:89] found id: ""
	I0610 11:52:31.041847   57945 logs.go:276] 0 containers: []
	W0610 11:52:31.041859   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:31.041867   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:31.041923   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:31.079897   57945 cri.go:89] found id: ""
	I0610 11:52:31.079927   57945 logs.go:276] 0 containers: []
	W0610 11:52:31.079938   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:31.079951   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:31.079967   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:31.092291   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:31.092321   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:31.163921   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:31.163943   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:31.163955   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:31.242247   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:31.242287   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:31.281257   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:31.281286   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:27.940743   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:30.440529   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:30.273256   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
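	(The recurring `no route to host` dials from libmachine indicate the 192.168.50.222 guest is not reachable on SSH at all. A quick reachability sketch from the host — IP taken from the log, `ping`/`nc` assumed available — would be:
	
	  ping -c 1 -W 2 192.168.50.222
	  nc -z -w 2 192.168.50.222 22 && echo "ssh port open" || echo "ssh port unreachable"
	)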
	I0610 11:52:30.192386   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:32.192470   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:34.691408   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:33.837783   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:33.851085   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:33.851164   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:33.885285   57945 cri.go:89] found id: ""
	I0610 11:52:33.885314   57945 logs.go:276] 0 containers: []
	W0610 11:52:33.885324   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:33.885332   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:33.885391   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:33.924958   57945 cri.go:89] found id: ""
	I0610 11:52:33.924996   57945 logs.go:276] 0 containers: []
	W0610 11:52:33.925006   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:33.925022   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:33.925083   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:33.958563   57945 cri.go:89] found id: ""
	I0610 11:52:33.958589   57945 logs.go:276] 0 containers: []
	W0610 11:52:33.958598   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:33.958606   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:33.958665   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:33.991575   57945 cri.go:89] found id: ""
	I0610 11:52:33.991606   57945 logs.go:276] 0 containers: []
	W0610 11:52:33.991616   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:33.991624   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:33.991693   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:34.029700   57945 cri.go:89] found id: ""
	I0610 11:52:34.029729   57945 logs.go:276] 0 containers: []
	W0610 11:52:34.029740   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:34.029748   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:34.029805   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:34.068148   57945 cri.go:89] found id: ""
	I0610 11:52:34.068183   57945 logs.go:276] 0 containers: []
	W0610 11:52:34.068194   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:34.068201   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:34.068275   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:34.100735   57945 cri.go:89] found id: ""
	I0610 11:52:34.100760   57945 logs.go:276] 0 containers: []
	W0610 11:52:34.100767   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:34.100772   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:34.100817   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:34.132898   57945 cri.go:89] found id: ""
	I0610 11:52:34.132927   57945 logs.go:276] 0 containers: []
	W0610 11:52:34.132937   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:34.132958   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:34.132972   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:34.184690   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:34.184723   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:34.199604   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:34.199641   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:34.270744   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:34.270763   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:34.270775   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:34.352291   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:34.352334   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:36.894188   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:36.914098   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:36.914158   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:36.957378   57945 cri.go:89] found id: ""
	I0610 11:52:36.957408   57945 logs.go:276] 0 containers: []
	W0610 11:52:36.957419   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:36.957427   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:36.957498   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:37.003576   57945 cri.go:89] found id: ""
	I0610 11:52:37.003602   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.003611   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:37.003618   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:37.003677   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:37.040221   57945 cri.go:89] found id: ""
	I0610 11:52:37.040245   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.040253   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:37.040259   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:37.040307   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:37.078151   57945 cri.go:89] found id: ""
	I0610 11:52:37.078185   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.078195   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:37.078202   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:37.078261   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:37.117446   57945 cri.go:89] found id: ""
	I0610 11:52:37.117468   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.117476   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:37.117482   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:37.117548   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:37.155320   57945 cri.go:89] found id: ""
	I0610 11:52:37.155344   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.155356   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:37.155364   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:37.155414   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:37.192194   57945 cri.go:89] found id: ""
	I0610 11:52:37.192221   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.192230   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:37.192238   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:37.192303   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:37.225567   57945 cri.go:89] found id: ""
	I0610 11:52:37.225594   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.225605   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:37.225616   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:37.225632   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:37.240139   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:37.240164   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 11:52:32.940571   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:34.940672   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:37.440898   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:36.353199   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:36.697419   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:39.190952   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	W0610 11:52:37.307754   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:37.307784   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:37.307801   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:37.385929   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:37.385964   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:37.424991   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:37.425029   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:39.974839   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:39.988788   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:39.988858   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:40.025922   57945 cri.go:89] found id: ""
	I0610 11:52:40.025947   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.025954   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:40.025967   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:40.026026   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:40.062043   57945 cri.go:89] found id: ""
	I0610 11:52:40.062076   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.062085   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:40.062094   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:40.062158   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:40.095441   57945 cri.go:89] found id: ""
	I0610 11:52:40.095465   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.095472   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:40.095478   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:40.095529   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:40.127633   57945 cri.go:89] found id: ""
	I0610 11:52:40.127662   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.127672   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:40.127680   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:40.127740   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:40.161232   57945 cri.go:89] found id: ""
	I0610 11:52:40.161257   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.161267   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:40.161274   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:40.161334   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:40.194491   57945 cri.go:89] found id: ""
	I0610 11:52:40.194521   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.194529   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:40.194535   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:40.194583   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:40.226376   57945 cri.go:89] found id: ""
	I0610 11:52:40.226404   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.226411   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:40.226416   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:40.226465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:40.257938   57945 cri.go:89] found id: ""
	I0610 11:52:40.257968   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.257978   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:40.257988   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:40.258004   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:40.327247   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:40.327276   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:40.327291   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:40.404231   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:40.404263   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:40.441554   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:40.441585   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:40.491952   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:40.491987   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:39.939538   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:41.939639   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:39.425159   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:41.191808   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:43.695646   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:43.006217   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:43.019113   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:43.019187   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:43.053010   57945 cri.go:89] found id: ""
	I0610 11:52:43.053035   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.053045   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:43.053051   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:43.053115   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:43.086118   57945 cri.go:89] found id: ""
	I0610 11:52:43.086145   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.086156   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:43.086171   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:43.086235   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:43.117892   57945 cri.go:89] found id: ""
	I0610 11:52:43.117919   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.117929   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:43.117937   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:43.118011   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:43.149751   57945 cri.go:89] found id: ""
	I0610 11:52:43.149777   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.149787   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:43.149795   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:43.149855   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:43.184215   57945 cri.go:89] found id: ""
	I0610 11:52:43.184250   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.184261   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:43.184268   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:43.184332   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:43.219758   57945 cri.go:89] found id: ""
	I0610 11:52:43.219787   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.219797   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:43.219805   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:43.219868   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:43.250698   57945 cri.go:89] found id: ""
	I0610 11:52:43.250728   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.250738   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:43.250746   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:43.250803   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:43.286526   57945 cri.go:89] found id: ""
	I0610 11:52:43.286556   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.286566   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:43.286576   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:43.286589   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:43.362219   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:43.362255   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:43.398332   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:43.398366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:43.449468   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:43.449502   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:43.462346   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:43.462381   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:43.539578   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:46.039720   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:46.052749   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:46.052821   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:46.093110   57945 cri.go:89] found id: ""
	I0610 11:52:46.093139   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.093147   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:46.093152   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:46.093219   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:46.130885   57945 cri.go:89] found id: ""
	I0610 11:52:46.130916   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.130924   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:46.130930   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:46.130977   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:46.167471   57945 cri.go:89] found id: ""
	I0610 11:52:46.167507   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.167524   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:46.167531   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:46.167593   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:46.204776   57945 cri.go:89] found id: ""
	I0610 11:52:46.204799   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.204807   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:46.204812   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:46.204860   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:46.244826   57945 cri.go:89] found id: ""
	I0610 11:52:46.244859   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.244869   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:46.244876   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:46.244942   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:46.281757   57945 cri.go:89] found id: ""
	I0610 11:52:46.281783   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.281791   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:46.281797   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:46.281844   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:46.319517   57945 cri.go:89] found id: ""
	I0610 11:52:46.319546   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.319558   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:46.319566   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:46.319636   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:46.355806   57945 cri.go:89] found id: ""
	I0610 11:52:46.355835   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.355846   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:46.355858   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:46.355872   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:46.433087   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:46.433131   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:46.468792   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:46.468829   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:46.517931   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:46.517969   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:46.530892   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:46.530935   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:46.592585   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:43.940733   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:46.440354   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:45.505281   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:48.577214   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
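	Note: throughout this window another process (pid 60146) keeps failing to open an SSH connection to 192.168.50.222:22 with "no route to host", i.e. the target VM is unreachable at the network level rather than merely refusing SSH. A quick reachability probe would look like the sketch below (ping and nc are generic tools, not taken from this log; the address and port are the ones libmachine keeps dialing above):

	    ping -c 3 192.168.50.222
	    nc -vz -w 3 192.168.50.222 22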
	I0610 11:52:46.191520   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:48.691214   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:49.093662   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:49.106539   57945 kubeadm.go:591] duration metric: took 4m4.396325615s to restartPrimaryControlPlane
	W0610 11:52:49.106625   57945 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0610 11:52:49.106658   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0610 11:52:48.441202   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:50.433923   57572 pod_ready.go:81] duration metric: took 4m0.000312516s for pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace to be "Ready" ...
	E0610 11:52:50.433960   57572 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0610 11:52:50.433982   57572 pod_ready.go:38] duration metric: took 4m5.113212783s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 11:52:50.434008   57572 kubeadm.go:591] duration metric: took 4m16.406085019s to restartPrimaryControlPlane
	W0610 11:52:50.434091   57572 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0610 11:52:50.434128   57572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0610 11:52:53.503059   57945 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.396374472s)
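	Note: after roughly four minutes of failed restart attempts, both clusters give up on restarting the existing control plane ("Unable to restart control-plane node(s), will reset cluster") and fall back to a reset followed by a fresh init. A sketch of that fallback, using only the commands visible in the Run:/Start: lines around here (the v1.20.0 binary path belongs to this cluster; the other cluster uses v1.30.1):

	    # Tear down the old control-plane state:
	    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	    # Install the freshly rendered kubeadm config, then re-init from it:
	    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml
	    # (the actual Start: line further down also passes a long --ignore-preflight-errors list)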
	I0610 11:52:53.503148   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:52:53.518235   57945 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 11:52:53.529298   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:52:53.539273   57945 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:52:53.539297   57945 kubeadm.go:156] found existing configuration files:
	
	I0610 11:52:53.539341   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 11:52:53.548285   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:52:53.548354   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:52:53.557659   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 11:52:53.569253   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:52:53.569330   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:52:53.579689   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 11:52:53.589800   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:52:53.589865   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:52:53.600324   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 11:52:53.610542   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:52:53.610612   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
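	Note: the block above is the stale-kubeconfig cleanup: kubeadm reset has already deleted /etc/kubernetes/*.conf, so the ls check exits with status 2, each grep for the expected control-plane endpoint also fails, and the (already absent) files are removed before re-initialising. The equivalent loop written out as a sketch (the grep pattern and paths are the ones in the log; the loop itself is an illustration):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # Keep the file only if it already points at the expected endpoint:
	      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	        sudo rm -f "/etc/kubernetes/$f"
	      fi
	    done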
	I0610 11:52:53.620144   57945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 11:52:53.687195   57945 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0610 11:52:53.687302   57945 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 11:52:53.851035   57945 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 11:52:53.851178   57945 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 11:52:53.851305   57945 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 11:52:54.037503   57945 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 11:52:54.039523   57945 out.go:204]   - Generating certificates and keys ...
	I0610 11:52:54.039621   57945 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 11:52:54.039718   57945 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 11:52:54.039850   57945 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 11:52:54.039959   57945 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 11:52:54.040055   57945 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 11:52:54.040135   57945 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 11:52:54.040233   57945 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 11:52:54.040506   57945 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 11:52:54.040892   57945 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 11:52:54.041344   57945 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 11:52:54.041411   57945 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 11:52:54.041507   57945 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 11:52:54.151486   57945 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 11:52:54.389555   57945 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 11:52:54.507653   57945 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 11:52:54.690886   57945 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 11:52:54.708542   57945 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 11:52:54.712251   57945 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 11:52:54.712504   57945 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 11:52:54.872755   57945 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 11:52:50.691517   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:53.191418   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:54.874801   57945 out.go:204]   - Booting up control plane ...
	I0610 11:52:54.874978   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 11:52:54.883224   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 11:52:54.885032   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 11:52:54.886182   57945 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 11:52:54.891030   57945 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
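	Note: at this point kubeadm has written the static Pod manifests into /etc/kubernetes/manifests and is waiting (up to 4m0s on this v1.20.0 cluster) for the kubelet to start them. A sketch of how to watch that from the node (the manifest names follow from the FileAvailable-- preflight checks listed in the init command above; crictl is the same tool used throughout this log):

	    # The four static Pod manifests kubeadm just generated:
	    sudo ls /etc/kubernetes/manifests
	    # etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml

	    # Once the kubelet has picked them up, the containers appear here:
	    sudo crictl ps --name=kube-apiserver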
	I0610 11:52:54.661214   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:57.729160   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:55.691987   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:58.192548   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:00.692060   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:03.192673   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:03.809217   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:06.885176   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:05.692004   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:07.692545   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:12.961318   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:10.191064   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:12.192258   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:14.691564   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:16.033278   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:16.691670   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:18.691801   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:21.778313   57572 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.344150357s)
	I0610 11:53:21.778398   57572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:53:21.793960   57572 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 11:53:21.803952   57572 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:53:21.813685   57572 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:53:21.813709   57572 kubeadm.go:156] found existing configuration files:
	
	I0610 11:53:21.813758   57572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 11:53:21.823957   57572 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:53:21.824027   57572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:53:21.833125   57572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 11:53:21.841834   57572 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:53:21.841893   57572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:53:21.850999   57572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 11:53:21.859858   57572 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:53:21.859920   57572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:53:21.869076   57572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 11:53:21.877079   57572 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:53:21.877141   57572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 11:53:21.887614   57572 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 11:53:21.941932   57572 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0610 11:53:21.941987   57572 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 11:53:22.084118   57572 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 11:53:22.084219   57572 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 11:53:22.084310   57572 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 11:53:22.287685   57572 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 11:53:22.289568   57572 out.go:204]   - Generating certificates and keys ...
	I0610 11:53:22.289674   57572 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 11:53:22.289779   57572 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 11:53:22.289917   57572 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 11:53:22.290032   57572 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 11:53:22.290144   57572 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 11:53:22.290234   57572 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 11:53:22.290339   57572 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 11:53:22.290439   57572 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 11:53:22.290558   57572 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 11:53:22.290674   57572 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 11:53:22.290732   57572 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 11:53:22.290819   57572 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 11:53:22.354674   57572 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 11:53:22.573948   57572 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 11:53:22.805694   57572 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 11:53:22.914740   57572 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 11:53:23.218887   57572 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 11:53:23.221479   57572 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 11:53:23.223937   57572 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 11:53:22.113312   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:20.692241   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:23.192124   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:23.695912   56769 pod_ready.go:81] duration metric: took 4m0.01073501s for pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace to be "Ready" ...
	E0610 11:53:23.695944   56769 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0610 11:53:23.695954   56769 pod_ready.go:38] duration metric: took 4m2.412094982s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
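	Note: both clusters hit the same 4m0s ceiling waiting for their metrics-server pod to report Ready (pid 57572 at 11:52:50, pid 56769 here), after which the extra wait is abandoned ("will not retry" / "context deadline exceeded") and startup continues. A sketch of checking the same condition by hand (the pod name is taken from the log; the context placeholder and the jsonpath expression are assumptions, not printed anywhere in this log):

	    kubectl --context <profile> -n kube-system get pod metrics-server-569cc877fc-5zg8j \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # prints "False" for as long as the pod_ready.go lines above keep reporting Ready=False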
	I0610 11:53:23.695972   56769 api_server.go:52] waiting for apiserver process to appear ...
	I0610 11:53:23.696001   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:53:23.696058   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:53:23.758822   56769 cri.go:89] found id: "61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:23.758850   56769 cri.go:89] found id: ""
	I0610 11:53:23.758860   56769 logs.go:276] 1 containers: [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29]
	I0610 11:53:23.758921   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.765128   56769 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:53:23.765198   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:53:23.798454   56769 cri.go:89] found id: "0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:23.798483   56769 cri.go:89] found id: ""
	I0610 11:53:23.798494   56769 logs.go:276] 1 containers: [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c]
	I0610 11:53:23.798560   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.802985   56769 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:53:23.803051   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:53:23.855781   56769 cri.go:89] found id: "04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:23.855810   56769 cri.go:89] found id: ""
	I0610 11:53:23.855819   56769 logs.go:276] 1 containers: [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933]
	I0610 11:53:23.855873   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.860285   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:53:23.860363   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:53:23.901849   56769 cri.go:89] found id: "7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:23.901868   56769 cri.go:89] found id: ""
	I0610 11:53:23.901878   56769 logs.go:276] 1 containers: [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9]
	I0610 11:53:23.901935   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.906116   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:53:23.906183   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:53:23.941376   56769 cri.go:89] found id: "3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:23.941396   56769 cri.go:89] found id: ""
	I0610 11:53:23.941405   56769 logs.go:276] 1 containers: [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb]
	I0610 11:53:23.941463   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.947379   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:53:23.947450   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:53:23.984733   56769 cri.go:89] found id: "7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:23.984757   56769 cri.go:89] found id: ""
	I0610 11:53:23.984766   56769 logs.go:276] 1 containers: [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43]
	I0610 11:53:23.984839   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.988701   56769 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:53:23.988752   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:53:24.024067   56769 cri.go:89] found id: ""
	I0610 11:53:24.024094   56769 logs.go:276] 0 containers: []
	W0610 11:53:24.024103   56769 logs.go:278] No container was found matching "kindnet"
	I0610 11:53:24.024110   56769 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0610 11:53:24.024170   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0610 11:53:24.058220   56769 cri.go:89] found id: "5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:24.058250   56769 cri.go:89] found id: "8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:24.058255   56769 cri.go:89] found id: ""
	I0610 11:53:24.058263   56769 logs.go:276] 2 containers: [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262]
	I0610 11:53:24.058321   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:24.062072   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:24.065706   56769 logs.go:123] Gathering logs for etcd [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c] ...
	I0610 11:53:24.065723   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:24.104622   56769 logs.go:123] Gathering logs for coredns [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933] ...
	I0610 11:53:24.104652   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:24.142432   56769 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:53:24.142457   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:53:24.670328   56769 logs.go:123] Gathering logs for container status ...
	I0610 11:53:24.670375   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:53:24.726557   56769 logs.go:123] Gathering logs for kube-scheduler [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9] ...
	I0610 11:53:24.726592   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:24.769111   56769 logs.go:123] Gathering logs for kube-proxy [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb] ...
	I0610 11:53:24.769150   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:24.811199   56769 logs.go:123] Gathering logs for kube-controller-manager [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43] ...
	I0610 11:53:24.811246   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:24.876489   56769 logs.go:123] Gathering logs for storage-provisioner [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e] ...
	I0610 11:53:24.876547   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:23.225694   57572 out.go:204]   - Booting up control plane ...
	I0610 11:53:23.225803   57572 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 11:53:23.225898   57572 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 11:53:23.226004   57572 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 11:53:23.245138   57572 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 11:53:23.246060   57572 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 11:53:23.246121   57572 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 11:53:23.375562   57572 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0610 11:53:23.375689   57572 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 11:53:23.877472   57572 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.888048ms
	I0610 11:53:23.877560   57572 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0610 11:53:25.185274   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:28.879976   57572 kubeadm.go:309] [api-check] The API server is healthy after 5.002334008s
	I0610 11:53:28.902382   57572 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 11:53:28.924552   57572 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 11:53:28.956686   57572 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 11:53:28.956958   57572 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-298179 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 11:53:28.971883   57572 kubeadm.go:309] [bootstrap-token] Using token: zdzp8m.ttyzgfzbws24vbk8
	I0610 11:53:24.916641   56769 logs.go:123] Gathering logs for kubelet ...
	I0610 11:53:24.916824   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:53:24.980737   56769 logs.go:123] Gathering logs for dmesg ...
	I0610 11:53:24.980779   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:53:24.998139   56769 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:53:24.998163   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 11:53:25.113809   56769 logs.go:123] Gathering logs for kube-apiserver [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29] ...
	I0610 11:53:25.113839   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:25.168214   56769 logs.go:123] Gathering logs for storage-provisioner [8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262] ...
	I0610 11:53:25.168254   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:27.708296   56769 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:53:27.730996   56769 api_server.go:72] duration metric: took 4m14.155149231s to wait for apiserver process to appear ...
	I0610 11:53:27.731021   56769 api_server.go:88] waiting for apiserver healthz status ...
	I0610 11:53:27.731057   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:53:27.731116   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:53:27.767385   56769 cri.go:89] found id: "61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:27.767411   56769 cri.go:89] found id: ""
	I0610 11:53:27.767420   56769 logs.go:276] 1 containers: [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29]
	I0610 11:53:27.767465   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:27.771646   56769 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:53:27.771723   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:53:27.806969   56769 cri.go:89] found id: "0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:27.806996   56769 cri.go:89] found id: ""
	I0610 11:53:27.807005   56769 logs.go:276] 1 containers: [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c]
	I0610 11:53:27.807060   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:27.811580   56769 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:53:27.811655   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:53:27.850853   56769 cri.go:89] found id: "04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:27.850879   56769 cri.go:89] found id: ""
	I0610 11:53:27.850888   56769 logs.go:276] 1 containers: [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933]
	I0610 11:53:27.850947   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:27.855284   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:53:27.855347   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:53:27.901228   56769 cri.go:89] found id: "7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:27.901256   56769 cri.go:89] found id: ""
	I0610 11:53:27.901266   56769 logs.go:276] 1 containers: [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9]
	I0610 11:53:27.901322   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:27.905361   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:53:27.905428   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:53:27.943162   56769 cri.go:89] found id: "3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:27.943187   56769 cri.go:89] found id: ""
	I0610 11:53:27.943197   56769 logs.go:276] 1 containers: [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb]
	I0610 11:53:27.943251   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:27.951934   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:53:27.952015   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:53:27.996288   56769 cri.go:89] found id: "7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:27.996316   56769 cri.go:89] found id: ""
	I0610 11:53:27.996325   56769 logs.go:276] 1 containers: [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43]
	I0610 11:53:27.996381   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:28.000307   56769 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:53:28.000378   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:53:28.036978   56769 cri.go:89] found id: ""
	I0610 11:53:28.037016   56769 logs.go:276] 0 containers: []
	W0610 11:53:28.037026   56769 logs.go:278] No container was found matching "kindnet"
	I0610 11:53:28.037033   56769 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0610 11:53:28.037091   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0610 11:53:28.078338   56769 cri.go:89] found id: "5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:28.078363   56769 cri.go:89] found id: "8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:28.078368   56769 cri.go:89] found id: ""
	I0610 11:53:28.078377   56769 logs.go:276] 2 containers: [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262]
	I0610 11:53:28.078433   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:28.082899   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:28.087382   56769 logs.go:123] Gathering logs for storage-provisioner [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e] ...
	I0610 11:53:28.087416   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:28.123014   56769 logs.go:123] Gathering logs for kubelet ...
	I0610 11:53:28.123051   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:53:28.186128   56769 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:53:28.186160   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 11:53:28.314495   56769 logs.go:123] Gathering logs for kube-apiserver [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29] ...
	I0610 11:53:28.314539   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:28.358953   56769 logs.go:123] Gathering logs for coredns [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933] ...
	I0610 11:53:28.358981   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:28.394280   56769 logs.go:123] Gathering logs for kube-controller-manager [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43] ...
	I0610 11:53:28.394306   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:28.450138   56769 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:53:28.450172   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:53:28.851268   56769 logs.go:123] Gathering logs for container status ...
	I0610 11:53:28.851307   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:53:28.909176   56769 logs.go:123] Gathering logs for dmesg ...
	I0610 11:53:28.909202   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:53:28.927322   56769 logs.go:123] Gathering logs for etcd [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c] ...
	I0610 11:53:28.927359   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:28.983941   56769 logs.go:123] Gathering logs for kube-scheduler [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9] ...
	I0610 11:53:28.983971   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:29.023327   56769 logs.go:123] Gathering logs for kube-proxy [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb] ...
	I0610 11:53:29.023352   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:29.063624   56769 logs.go:123] Gathering logs for storage-provisioner [8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262] ...
	I0610 11:53:29.063655   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:28.973316   57572 out.go:204]   - Configuring RBAC rules ...
	I0610 11:53:28.973437   57572 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 11:53:28.979726   57572 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 11:53:28.989075   57572 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 11:53:28.999678   57572 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 11:53:29.005717   57572 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 11:53:29.014439   57572 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 11:53:29.292088   57572 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 11:53:29.734969   57572 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 11:53:30.288723   57572 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 11:53:30.289824   57572 kubeadm.go:309] 
	I0610 11:53:30.289918   57572 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 11:53:30.289930   57572 kubeadm.go:309] 
	I0610 11:53:30.290061   57572 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 11:53:30.290078   57572 kubeadm.go:309] 
	I0610 11:53:30.290107   57572 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 11:53:30.290191   57572 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 11:53:30.290268   57572 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 11:53:30.290316   57572 kubeadm.go:309] 
	I0610 11:53:30.290402   57572 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 11:53:30.290412   57572 kubeadm.go:309] 
	I0610 11:53:30.290481   57572 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 11:53:30.290494   57572 kubeadm.go:309] 
	I0610 11:53:30.290539   57572 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 11:53:30.290602   57572 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 11:53:30.290659   57572 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 11:53:30.290666   57572 kubeadm.go:309] 
	I0610 11:53:30.290749   57572 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 11:53:30.290816   57572 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 11:53:30.290823   57572 kubeadm.go:309] 
	I0610 11:53:30.290901   57572 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token zdzp8m.ttyzgfzbws24vbk8 \
	I0610 11:53:30.291011   57572 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e \
	I0610 11:53:30.291032   57572 kubeadm.go:309] 	--control-plane 
	I0610 11:53:30.291038   57572 kubeadm.go:309] 
	I0610 11:53:30.291113   57572 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 11:53:30.291120   57572 kubeadm.go:309] 
	I0610 11:53:30.291230   57572 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token zdzp8m.ttyzgfzbws24vbk8 \
	I0610 11:53:30.291370   57572 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e 
	I0610 11:53:30.291895   57572 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 11:53:30.291925   57572 cni.go:84] Creating CNI manager for ""
	I0610 11:53:30.291936   57572 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:53:30.294227   57572 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 11:53:30.295470   57572 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 11:53:30.306011   57572 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0610 11:53:30.322832   57572 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 11:53:30.322890   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:30.322960   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-298179 minikube.k8s.io/updated_at=2024_06_10T11_53_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=no-preload-298179 minikube.k8s.io/primary=true
	I0610 11:53:30.486915   57572 ops.go:34] apiserver oom_adj: -16
	I0610 11:53:30.487320   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:30.988103   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:31.488094   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:31.988314   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:32.487603   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:31.265182   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:31.597111   56769 api_server.go:253] Checking apiserver healthz at https://192.168.61.19:8443/healthz ...
	I0610 11:53:31.601589   56769 api_server.go:279] https://192.168.61.19:8443/healthz returned 200:
	ok
	I0610 11:53:31.602609   56769 api_server.go:141] control plane version: v1.30.1
	I0610 11:53:31.602631   56769 api_server.go:131] duration metric: took 3.871604169s to wait for apiserver health ...
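The healthz lines just above ("Checking apiserver healthz at https://192.168.61.19:8443/healthz ... returned 200: ok") are a plain HTTPS poll of the apiserver's /healthz endpoint. A minimal sketch of such a poll follows; the address and timeouts are copied or assumed for illustration, and minikube's own api_server.go handles certificates and retries differently.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 (body "ok") or the
// deadline passes. TLS verification is skipped here purely for illustration.
func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("%s not healthy after %s", url, deadline)
}

func main() {
	// Address taken from the log above; adjust for your own cluster.
	if err := waitForHealthz("https://192.168.61.19:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```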
	I0610 11:53:31.602639   56769 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 11:53:31.602663   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:53:31.602716   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:53:31.650102   56769 cri.go:89] found id: "61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:31.650130   56769 cri.go:89] found id: ""
	I0610 11:53:31.650139   56769 logs.go:276] 1 containers: [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29]
	I0610 11:53:31.650197   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.654234   56769 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:53:31.654299   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:53:31.690704   56769 cri.go:89] found id: "0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:31.690736   56769 cri.go:89] found id: ""
	I0610 11:53:31.690750   56769 logs.go:276] 1 containers: [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c]
	I0610 11:53:31.690810   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.695139   56769 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:53:31.695209   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:53:31.732593   56769 cri.go:89] found id: "04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:31.732614   56769 cri.go:89] found id: ""
	I0610 11:53:31.732621   56769 logs.go:276] 1 containers: [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933]
	I0610 11:53:31.732667   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.737201   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:53:31.737277   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:53:31.774177   56769 cri.go:89] found id: "7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:31.774219   56769 cri.go:89] found id: ""
	I0610 11:53:31.774239   56769 logs.go:276] 1 containers: [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9]
	I0610 11:53:31.774300   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.778617   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:53:31.778695   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:53:31.816633   56769 cri.go:89] found id: "3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:31.816657   56769 cri.go:89] found id: ""
	I0610 11:53:31.816665   56769 logs.go:276] 1 containers: [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb]
	I0610 11:53:31.816715   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.820846   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:53:31.820928   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:53:31.857021   56769 cri.go:89] found id: "7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:31.857052   56769 cri.go:89] found id: ""
	I0610 11:53:31.857062   56769 logs.go:276] 1 containers: [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43]
	I0610 11:53:31.857127   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.862825   56769 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:53:31.862888   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:53:31.903792   56769 cri.go:89] found id: ""
	I0610 11:53:31.903817   56769 logs.go:276] 0 containers: []
	W0610 11:53:31.903825   56769 logs.go:278] No container was found matching "kindnet"
	I0610 11:53:31.903837   56769 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0610 11:53:31.903885   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0610 11:53:31.942392   56769 cri.go:89] found id: "5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:31.942414   56769 cri.go:89] found id: "8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:31.942419   56769 cri.go:89] found id: ""
	I0610 11:53:31.942428   56769 logs.go:276] 2 containers: [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262]
	I0610 11:53:31.942481   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.949047   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.953590   56769 logs.go:123] Gathering logs for kube-scheduler [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9] ...
	I0610 11:53:31.953625   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:31.991926   56769 logs.go:123] Gathering logs for kube-controller-manager [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43] ...
	I0610 11:53:31.991954   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:32.040857   56769 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:53:32.040894   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:53:32.432680   56769 logs.go:123] Gathering logs for container status ...
	I0610 11:53:32.432731   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:53:32.474819   56769 logs.go:123] Gathering logs for kubelet ...
	I0610 11:53:32.474849   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:53:32.530152   56769 logs.go:123] Gathering logs for dmesg ...
	I0610 11:53:32.530189   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:53:32.547698   56769 logs.go:123] Gathering logs for etcd [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c] ...
	I0610 11:53:32.547735   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:32.598580   56769 logs.go:123] Gathering logs for kube-proxy [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb] ...
	I0610 11:53:32.598634   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:32.643864   56769 logs.go:123] Gathering logs for storage-provisioner [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e] ...
	I0610 11:53:32.643900   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:32.679085   56769 logs.go:123] Gathering logs for storage-provisioner [8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262] ...
	I0610 11:53:32.679118   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:32.714247   56769 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:53:32.714279   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 11:53:32.818508   56769 logs.go:123] Gathering logs for kube-apiserver [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29] ...
	I0610 11:53:32.818551   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:32.862390   56769 logs.go:123] Gathering logs for coredns [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933] ...
	I0610 11:53:32.862424   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:35.408169   56769 system_pods.go:59] 8 kube-system pods found
	I0610 11:53:35.408198   56769 system_pods.go:61] "coredns-7db6d8ff4d-7dlzb" [4b2618cd-b48c-44bd-a07d-4fe4585a14fa] Running
	I0610 11:53:35.408203   56769 system_pods.go:61] "etcd-embed-certs-832735" [4b7d413d-9a2a-4677-b279-5a6d39904679] Running
	I0610 11:53:35.408208   56769 system_pods.go:61] "kube-apiserver-embed-certs-832735" [7e11e03e-7b15-4e9b-8f9a-9a46d7aadd7e] Running
	I0610 11:53:35.408211   56769 system_pods.go:61] "kube-controller-manager-embed-certs-832735" [75aa996d-fdf3-4c32-b25d-03c7582b3502] Running
	I0610 11:53:35.408215   56769 system_pods.go:61] "kube-proxy-b7x2p" [fe1cd055-691f-46b1-ada7-7dded31d2308] Running
	I0610 11:53:35.408218   56769 system_pods.go:61] "kube-scheduler-embed-certs-832735" [b7a7fcfb-7ce9-4470-9052-79bc13029408] Running
	I0610 11:53:35.408223   56769 system_pods.go:61] "metrics-server-569cc877fc-5zg8j" [e979b4b0-356d-479d-990f-d9e6e46a1a9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 11:53:35.408233   56769 system_pods.go:61] "storage-provisioner" [47aa143e-3545-492d-ac93-e62f0076e0f4] Running
	I0610 11:53:35.408241   56769 system_pods.go:74] duration metric: took 3.805596332s to wait for pod list to return data ...
	I0610 11:53:35.408248   56769 default_sa.go:34] waiting for default service account to be created ...
	I0610 11:53:35.410634   56769 default_sa.go:45] found service account: "default"
	I0610 11:53:35.410659   56769 default_sa.go:55] duration metric: took 2.405735ms for default service account to be created ...
	I0610 11:53:35.410667   56769 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 11:53:35.415849   56769 system_pods.go:86] 8 kube-system pods found
	I0610 11:53:35.415871   56769 system_pods.go:89] "coredns-7db6d8ff4d-7dlzb" [4b2618cd-b48c-44bd-a07d-4fe4585a14fa] Running
	I0610 11:53:35.415876   56769 system_pods.go:89] "etcd-embed-certs-832735" [4b7d413d-9a2a-4677-b279-5a6d39904679] Running
	I0610 11:53:35.415881   56769 system_pods.go:89] "kube-apiserver-embed-certs-832735" [7e11e03e-7b15-4e9b-8f9a-9a46d7aadd7e] Running
	I0610 11:53:35.415886   56769 system_pods.go:89] "kube-controller-manager-embed-certs-832735" [75aa996d-fdf3-4c32-b25d-03c7582b3502] Running
	I0610 11:53:35.415890   56769 system_pods.go:89] "kube-proxy-b7x2p" [fe1cd055-691f-46b1-ada7-7dded31d2308] Running
	I0610 11:53:35.415894   56769 system_pods.go:89] "kube-scheduler-embed-certs-832735" [b7a7fcfb-7ce9-4470-9052-79bc13029408] Running
	I0610 11:53:35.415900   56769 system_pods.go:89] "metrics-server-569cc877fc-5zg8j" [e979b4b0-356d-479d-990f-d9e6e46a1a9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 11:53:35.415906   56769 system_pods.go:89] "storage-provisioner" [47aa143e-3545-492d-ac93-e62f0076e0f4] Running
	I0610 11:53:35.415913   56769 system_pods.go:126] duration metric: took 5.241641ms to wait for k8s-apps to be running ...
	I0610 11:53:35.415919   56769 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 11:53:35.415957   56769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:53:35.431179   56769 system_svc.go:56] duration metric: took 15.252123ms WaitForService to wait for kubelet
	I0610 11:53:35.431209   56769 kubeadm.go:576] duration metric: took 4m21.85536785s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 11:53:35.431233   56769 node_conditions.go:102] verifying NodePressure condition ...
	I0610 11:53:35.433918   56769 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 11:53:35.433941   56769 node_conditions.go:123] node cpu capacity is 2
	I0610 11:53:35.433955   56769 node_conditions.go:105] duration metric: took 2.718538ms to run NodePressure ...
	I0610 11:53:35.433966   56769 start.go:240] waiting for startup goroutines ...
	I0610 11:53:35.433973   56769 start.go:245] waiting for cluster config update ...
	I0610 11:53:35.433982   56769 start.go:254] writing updated cluster config ...
	I0610 11:53:35.434234   56769 ssh_runner.go:195] Run: rm -f paused
	I0610 11:53:35.483552   56769 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 11:53:35.485459   56769 out.go:177] * Done! kubectl is now configured to use "embed-certs-832735" cluster and "default" namespace by default
	I0610 11:53:34.892890   57945 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0610 11:53:34.893019   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:53:34.893195   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:53:32.987749   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:33.488008   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:33.988419   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:34.488002   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:34.988349   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:35.487347   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:35.987479   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:36.487972   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:36.987442   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:37.488069   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:34.337236   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:39.893441   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:53:39.893640   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:53:37.987751   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:38.488215   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:38.987955   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:39.487394   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:39.987431   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:40.488304   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:40.987779   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:41.488123   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:41.987438   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:42.487799   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:42.987548   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:43.084050   57572 kubeadm.go:1107] duration metric: took 12.761214532s to wait for elevateKubeSystemPrivileges
	W0610 11:53:43.084095   57572 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 11:53:43.084109   57572 kubeadm.go:393] duration metric: took 5m9.100565129s to StartCluster
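The repeated `kubectl get sa default` calls above are the elevateKubeSystemPrivileges step waiting for the `default` ServiceAccount to exist before the `minikube-rbac` clusterrolebinding (created earlier in this block) takes effect. A rough sketch of that wait, shelling out to kubectl the way the log does, could look like this; the binary path and kubeconfig are taken from the log, while the half-second retry interval is an assumption.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries "kubectl get sa default" until it succeeds, which is
// roughly what the elevateKubeSystemPrivileges step in the log above is doing.
func waitForDefaultSA(kubectl, kubeconfig string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account did not appear within %s", deadline)
}

func main() {
	err := waitForDefaultSA(
		"/var/lib/minikube/binaries/v1.30.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	fmt.Println("wait result:", err)
}
```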
	I0610 11:53:43.084128   57572 settings.go:142] acquiring lock: {Name:mk00410f6b6051b7558c7a348cc8c9f1c35c7547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:53:43.084215   57572 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 11:53:43.085889   57572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/kubeconfig: {Name:mk6bc087e599296d9e4a696a021944fac20ee98b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:53:43.086151   57572 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 11:53:43.087762   57572 out.go:177] * Verifying Kubernetes components...
	I0610 11:53:43.086215   57572 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 11:53:43.087796   57572 addons.go:69] Setting storage-provisioner=true in profile "no-preload-298179"
	I0610 11:53:43.087800   57572 addons.go:69] Setting default-storageclass=true in profile "no-preload-298179"
	I0610 11:53:43.087819   57572 addons.go:234] Setting addon storage-provisioner=true in "no-preload-298179"
	W0610 11:53:43.087825   57572 addons.go:243] addon storage-provisioner should already be in state true
	I0610 11:53:43.087832   57572 addons.go:69] Setting metrics-server=true in profile "no-preload-298179"
	I0610 11:53:43.087847   57572 addons.go:234] Setting addon metrics-server=true in "no-preload-298179"
	W0610 11:53:43.087856   57572 addons.go:243] addon metrics-server should already be in state true
	I0610 11:53:43.087826   57572 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-298179"
	I0610 11:53:43.087878   57572 host.go:66] Checking if "no-preload-298179" exists ...
	I0610 11:53:43.089535   57572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:53:43.087856   57572 host.go:66] Checking if "no-preload-298179" exists ...
	I0610 11:53:43.086356   57572 config.go:182] Loaded profile config "no-preload-298179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:53:43.088180   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.088182   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.089687   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.089713   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.089869   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.089895   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.104587   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38283
	I0610 11:53:43.104609   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44535
	I0610 11:53:43.104586   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34031
	I0610 11:53:43.105501   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.105566   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.105508   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.105983   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.105997   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.106134   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.106153   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.106160   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.106184   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.106350   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.106526   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.106568   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.106692   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetState
	I0610 11:53:43.106890   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.106918   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.107118   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.107141   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.109645   57572 addons.go:234] Setting addon default-storageclass=true in "no-preload-298179"
	W0610 11:53:43.109664   57572 addons.go:243] addon default-storageclass should already be in state true
	I0610 11:53:43.109692   57572 host.go:66] Checking if "no-preload-298179" exists ...
	I0610 11:53:43.109914   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.109939   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.123209   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45401
	I0610 11:53:43.123703   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.124011   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38295
	I0610 11:53:43.124351   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.124372   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.124393   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.124777   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.124847   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.124869   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.124998   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetState
	I0610 11:53:43.125208   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.125941   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.125994   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.126208   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35175
	I0610 11:53:43.126555   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.126915   57572 main.go:141] libmachine: (no-preload-298179) Calling .DriverName
	I0610 11:53:43.127030   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.127038   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.129007   57572 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0610 11:53:43.127369   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.130329   57572 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0610 11:53:43.130349   57572 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0610 11:53:43.130372   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHHostname
	I0610 11:53:43.130501   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetState
	I0610 11:53:43.132699   57572 main.go:141] libmachine: (no-preload-298179) Calling .DriverName
	I0610 11:53:43.134359   57572 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 11:53:40.417218   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:43.489341   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
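The interleaved "Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host" lines come from a separate minikube process in this log that cannot yet reach its VM's SSH port, so libmachine keeps retrying. The same failure can be reproduced with a plain TCP dial, sketched below; the address is taken from the log and the retry cadence is illustrative.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.50.222:22" // SSH address taken from the log above
	for i := 0; i < 5; i++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// Typically "connect: no route to host" while the VM is down or unrouted.
			fmt.Println("dial attempt", i+1, "failed:", err)
			time.Sleep(3 * time.Second)
			continue
		}
		conn.Close()
		fmt.Println("SSH port reachable")
		return
	}
	fmt.Println("giving up; host still unreachable")
}
```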
	I0610 11:53:43.135801   57572 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 11:53:43.135818   57572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 11:53:43.135837   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHHostname
	I0610 11:53:43.134045   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.135918   57572 main.go:141] libmachine: (no-preload-298179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:72:68", ip: ""} in network mk-no-preload-298179: {Iface:virbr2 ExpiryTime:2024-06-10 12:48:08 +0000 UTC Type:0 Mac:52:54:00:92:72:68 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:no-preload-298179 Clientid:01:52:54:00:92:72:68}
	I0610 11:53:43.135948   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined IP address 192.168.39.48 and MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.134772   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHPort
	I0610 11:53:43.136159   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHKeyPath
	I0610 11:53:43.136318   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHUsername
	I0610 11:53:43.136621   57572 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/no-preload-298179/id_rsa Username:docker}
	I0610 11:53:43.139217   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.139636   57572 main.go:141] libmachine: (no-preload-298179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:72:68", ip: ""} in network mk-no-preload-298179: {Iface:virbr2 ExpiryTime:2024-06-10 12:48:08 +0000 UTC Type:0 Mac:52:54:00:92:72:68 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:no-preload-298179 Clientid:01:52:54:00:92:72:68}
	I0610 11:53:43.139658   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined IP address 192.168.39.48 and MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.140091   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHPort
	I0610 11:53:43.140568   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHKeyPath
	I0610 11:53:43.140865   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHUsername
	I0610 11:53:43.141293   57572 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/no-preload-298179/id_rsa Username:docker}
	I0610 11:53:43.145179   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40713
	I0610 11:53:43.145813   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.146336   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.146358   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.146675   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.146987   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetState
	I0610 11:53:43.148747   57572 main.go:141] libmachine: (no-preload-298179) Calling .DriverName
	I0610 11:53:43.149026   57572 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 11:53:43.149042   57572 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 11:53:43.149064   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHHostname
	I0610 11:53:43.152048   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.152550   57572 main.go:141] libmachine: (no-preload-298179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:72:68", ip: ""} in network mk-no-preload-298179: {Iface:virbr2 ExpiryTime:2024-06-10 12:48:08 +0000 UTC Type:0 Mac:52:54:00:92:72:68 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:no-preload-298179 Clientid:01:52:54:00:92:72:68}
	I0610 11:53:43.152572   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined IP address 192.168.39.48 and MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.152780   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHPort
	I0610 11:53:43.153021   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHKeyPath
	I0610 11:53:43.153256   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHUsername
	I0610 11:53:43.153406   57572 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/no-preload-298179/id_rsa Username:docker}
	I0610 11:53:43.293079   57572 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 11:53:43.323699   57572 node_ready.go:35] waiting up to 6m0s for node "no-preload-298179" to be "Ready" ...
	I0610 11:53:43.331922   57572 node_ready.go:49] node "no-preload-298179" has status "Ready":"True"
	I0610 11:53:43.331946   57572 node_ready.go:38] duration metric: took 8.212434ms for node "no-preload-298179" to be "Ready" ...
	I0610 11:53:43.331956   57572 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 11:53:43.338721   57572 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9mqrm" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:43.399175   57572 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0610 11:53:43.399196   57572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0610 11:53:43.432920   57572 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0610 11:53:43.432986   57572 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0610 11:53:43.453982   57572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 11:53:43.457146   57572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 11:53:43.500871   57572 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0610 11:53:43.500900   57572 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0610 11:53:43.601303   57572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0610 11:53:44.376916   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.376992   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.377083   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.377105   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.377298   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.377377   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.377383   57572 main.go:141] libmachine: (no-preload-298179) DBG | Closing plugin on server side
	I0610 11:53:44.377301   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.377394   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.377403   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.377405   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.377414   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.377421   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.377608   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.377634   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.379039   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.379090   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.379054   57572 main.go:141] libmachine: (no-preload-298179) DBG | Closing plugin on server side
	I0610 11:53:44.397328   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.397354   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.397626   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.397644   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.880094   57572 pod_ready.go:92] pod "coredns-7db6d8ff4d-9mqrm" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:44.880129   57572 pod_ready.go:81] duration metric: took 1.541384627s for pod "coredns-7db6d8ff4d-9mqrm" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.880149   57572 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f622z" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.901625   57572 pod_ready.go:92] pod "coredns-7db6d8ff4d-f622z" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:44.901649   57572 pod_ready.go:81] duration metric: took 21.492207ms for pod "coredns-7db6d8ff4d-f622z" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.901658   57572 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.907530   57572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.306184796s)
	I0610 11:53:44.907587   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.907603   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.907929   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.907991   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.908005   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.908015   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.908262   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.908301   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.908305   57572 main.go:141] libmachine: (no-preload-298179) DBG | Closing plugin on server side
	I0610 11:53:44.908315   57572 addons.go:475] Verifying addon metrics-server=true in "no-preload-298179"
	I0610 11:53:44.910622   57572 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0610 11:53:44.911848   57572 addons.go:510] duration metric: took 1.825630817s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
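As the metrics-server lines above show, enabling an addon amounts to copying its manifests into /etc/kubernetes/addons/ on the node and applying them in one `sudo KUBECONFIG=... kubectl apply -f a.yaml -f b.yaml ...` call. A condensed sketch of just the apply step follows; paths and the kubectl binary location are copied from the log, and error handling is simplified.

```go
package main

import (
	"fmt"
	"os/exec"
)

// applyAddon applies a set of already-copied manifests in a single kubectl
// invocation, mirroring the metrics-server apply in the log above. The
// KUBECONFIG=... token is passed through sudo exactly as in the logged command.
func applyAddon(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"KUBECONFIG=" + kubeconfig, kubectl, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	err := applyAddon("/var/lib/minikube/binaries/v1.30.1/kubectl", "/var/lib/minikube/kubeconfig", manifests)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
```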
	I0610 11:53:44.922534   57572 pod_ready.go:92] pod "etcd-no-preload-298179" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:44.922562   57572 pod_ready.go:81] duration metric: took 20.896794ms for pod "etcd-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.922576   57572 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.947545   57572 pod_ready.go:92] pod "kube-apiserver-no-preload-298179" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:44.947569   57572 pod_ready.go:81] duration metric: took 24.984822ms for pod "kube-apiserver-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.947578   57572 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.956216   57572 pod_ready.go:92] pod "kube-controller-manager-no-preload-298179" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:44.956240   57572 pod_ready.go:81] duration metric: took 8.656291ms for pod "kube-controller-manager-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.956256   57572 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fhndh" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:45.326936   57572 pod_ready.go:92] pod "kube-proxy-fhndh" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:45.326977   57572 pod_ready.go:81] duration metric: took 370.713967ms for pod "kube-proxy-fhndh" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:45.326987   57572 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:45.733487   57572 pod_ready.go:92] pod "kube-scheduler-no-preload-298179" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:45.733514   57572 pod_ready.go:81] duration metric: took 406.51925ms for pod "kube-scheduler-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:45.733525   57572 pod_ready.go:38] duration metric: took 2.401559014s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 11:53:45.733544   57572 api_server.go:52] waiting for apiserver process to appear ...
	I0610 11:53:45.733612   57572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:53:45.754814   57572 api_server.go:72] duration metric: took 2.668628419s to wait for apiserver process to appear ...
	I0610 11:53:45.754838   57572 api_server.go:88] waiting for apiserver healthz status ...
	I0610 11:53:45.754867   57572 api_server.go:253] Checking apiserver healthz at https://192.168.39.48:8443/healthz ...
	I0610 11:53:45.763742   57572 api_server.go:279] https://192.168.39.48:8443/healthz returned 200:
	ok
	I0610 11:53:45.765314   57572 api_server.go:141] control plane version: v1.30.1
	I0610 11:53:45.765345   57572 api_server.go:131] duration metric: took 10.498726ms to wait for apiserver health ...
	I0610 11:53:45.765356   57572 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 11:53:45.930764   57572 system_pods.go:59] 9 kube-system pods found
	I0610 11:53:45.930792   57572 system_pods.go:61] "coredns-7db6d8ff4d-9mqrm" [6269d670-dffa-4526-8117-0b44df04554a] Running
	I0610 11:53:45.930796   57572 system_pods.go:61] "coredns-7db6d8ff4d-f622z" [16cb4de3-afa9-4e45-bc85-e51273973808] Running
	I0610 11:53:45.930800   57572 system_pods.go:61] "etcd-no-preload-298179" [088f1950-04c4-49e0-b3e2-fe8b5f398a08] Running
	I0610 11:53:45.930806   57572 system_pods.go:61] "kube-apiserver-no-preload-298179" [11bad142-25ff-4aa9-9d9e-4b7cbb053bdd] Running
	I0610 11:53:45.930810   57572 system_pods.go:61] "kube-controller-manager-no-preload-298179" [ac29a4d9-6e9c-44fd-bb39-477255b94d0c] Running
	I0610 11:53:45.930814   57572 system_pods.go:61] "kube-proxy-fhndh" [50f848e7-44f6-4ab1-bf94-3189733abca2] Running
	I0610 11:53:45.930818   57572 system_pods.go:61] "kube-scheduler-no-preload-298179" [8569c375-b9bd-4a46-91ea-c6372056e45d] Running
	I0610 11:53:45.930826   57572 system_pods.go:61] "metrics-server-569cc877fc-jp7dr" [21136ae9-40d8-4857-aca5-47e3fa3b7e9c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 11:53:45.930831   57572 system_pods.go:61] "storage-provisioner" [783f523c-4c21-4ae0-bc18-9c391e7342b0] Running
	I0610 11:53:45.930843   57572 system_pods.go:74] duration metric: took 165.479385ms to wait for pod list to return data ...
	I0610 11:53:45.930855   57572 default_sa.go:34] waiting for default service account to be created ...
	I0610 11:53:46.127109   57572 default_sa.go:45] found service account: "default"
	I0610 11:53:46.127145   57572 default_sa.go:55] duration metric: took 196.279685ms for default service account to be created ...
	I0610 11:53:46.127154   57572 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 11:53:46.330560   57572 system_pods.go:86] 9 kube-system pods found
	I0610 11:53:46.330587   57572 system_pods.go:89] "coredns-7db6d8ff4d-9mqrm" [6269d670-dffa-4526-8117-0b44df04554a] Running
	I0610 11:53:46.330592   57572 system_pods.go:89] "coredns-7db6d8ff4d-f622z" [16cb4de3-afa9-4e45-bc85-e51273973808] Running
	I0610 11:53:46.330597   57572 system_pods.go:89] "etcd-no-preload-298179" [088f1950-04c4-49e0-b3e2-fe8b5f398a08] Running
	I0610 11:53:46.330601   57572 system_pods.go:89] "kube-apiserver-no-preload-298179" [11bad142-25ff-4aa9-9d9e-4b7cbb053bdd] Running
	I0610 11:53:46.330605   57572 system_pods.go:89] "kube-controller-manager-no-preload-298179" [ac29a4d9-6e9c-44fd-bb39-477255b94d0c] Running
	I0610 11:53:46.330608   57572 system_pods.go:89] "kube-proxy-fhndh" [50f848e7-44f6-4ab1-bf94-3189733abca2] Running
	I0610 11:53:46.330612   57572 system_pods.go:89] "kube-scheduler-no-preload-298179" [8569c375-b9bd-4a46-91ea-c6372056e45d] Running
	I0610 11:53:46.330619   57572 system_pods.go:89] "metrics-server-569cc877fc-jp7dr" [21136ae9-40d8-4857-aca5-47e3fa3b7e9c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 11:53:46.330623   57572 system_pods.go:89] "storage-provisioner" [783f523c-4c21-4ae0-bc18-9c391e7342b0] Running
	I0610 11:53:46.330631   57572 system_pods.go:126] duration metric: took 203.472984ms to wait for k8s-apps to be running ...
	I0610 11:53:46.330640   57572 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 11:53:46.330681   57572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:53:46.345084   57572 system_svc.go:56] duration metric: took 14.432966ms WaitForService to wait for kubelet
	I0610 11:53:46.345113   57572 kubeadm.go:576] duration metric: took 3.258932349s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 11:53:46.345131   57572 node_conditions.go:102] verifying NodePressure condition ...
	I0610 11:53:46.528236   57572 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 11:53:46.528269   57572 node_conditions.go:123] node cpu capacity is 2
	I0610 11:53:46.528278   57572 node_conditions.go:105] duration metric: took 183.142711ms to run NodePressure ...
	I0610 11:53:46.528288   57572 start.go:240] waiting for startup goroutines ...
	I0610 11:53:46.528294   57572 start.go:245] waiting for cluster config update ...
	I0610 11:53:46.528303   57572 start.go:254] writing updated cluster config ...
	I0610 11:53:46.528561   57572 ssh_runner.go:195] Run: rm -f paused
	I0610 11:53:46.576348   57572 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 11:53:46.578434   57572 out.go:177] * Done! kubectl is now configured to use "no-preload-298179" cluster and "default" namespace by default
	I0610 11:53:49.894176   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:53:49.894368   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:53:49.573292   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:52.641233   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:58.721260   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:01.793270   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:07.873263   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:09.895012   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:54:09.895413   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:54:10.945237   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:17.025183   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:20.097196   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:26.177217   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:29.249267   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:35.329193   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:38.401234   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:44.481254   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:47.553200   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:49.896623   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:54:49.896849   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:54:49.896868   57945 kubeadm.go:309] 
	I0610 11:54:49.896922   57945 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0610 11:54:49.897030   57945 kubeadm.go:309] 		timed out waiting for the condition
	I0610 11:54:49.897053   57945 kubeadm.go:309] 
	I0610 11:54:49.897121   57945 kubeadm.go:309] 	This error is likely caused by:
	I0610 11:54:49.897157   57945 kubeadm.go:309] 		- The kubelet is not running
	I0610 11:54:49.897308   57945 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0610 11:54:49.897322   57945 kubeadm.go:309] 
	I0610 11:54:49.897493   57945 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0610 11:54:49.897553   57945 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0610 11:54:49.897612   57945 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0610 11:54:49.897623   57945 kubeadm.go:309] 
	I0610 11:54:49.897755   57945 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0610 11:54:49.897866   57945 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0610 11:54:49.897876   57945 kubeadm.go:309] 
	I0610 11:54:49.898032   57945 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0610 11:54:49.898139   57945 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0610 11:54:49.898253   57945 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0610 11:54:49.898357   57945 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0610 11:54:49.898365   57945 kubeadm.go:309] 
	I0610 11:54:49.899094   57945 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 11:54:49.899208   57945 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0610 11:54:49.899302   57945 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0610 11:54:49.899441   57945 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0610 11:54:49.899502   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0610 11:54:50.366528   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:54:50.380107   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:54:50.390067   57945 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:54:50.390089   57945 kubeadm.go:156] found existing configuration files:
	
	I0610 11:54:50.390132   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 11:54:50.399159   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:54:50.399222   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:54:50.409346   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 11:54:50.420402   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:54:50.420458   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:54:50.432874   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 11:54:50.444351   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:54:50.444430   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:54:50.458175   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 11:54:50.468538   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:54:50.468611   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 11:54:50.480033   57945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 11:54:50.543600   57945 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0610 11:54:50.543653   57945 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 11:54:50.682810   57945 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 11:54:50.682970   57945 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 11:54:50.683117   57945 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 11:54:50.877761   57945 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 11:54:50.879686   57945 out.go:204]   - Generating certificates and keys ...
	I0610 11:54:50.879788   57945 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 11:54:50.879881   57945 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 11:54:50.880010   57945 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 11:54:50.880075   57945 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 11:54:50.880145   57945 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 11:54:50.880235   57945 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 11:54:50.880334   57945 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 11:54:50.880543   57945 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 11:54:50.880654   57945 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 11:54:50.880771   57945 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 11:54:50.880835   57945 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 11:54:50.880912   57945 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 11:54:51.326073   57945 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 11:54:51.537409   57945 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 11:54:51.721400   57945 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 11:54:51.884882   57945 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 11:54:51.904377   57945 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 11:54:51.906470   57945 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 11:54:51.906560   57945 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 11:54:52.065800   57945 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 11:54:52.067657   57945 out.go:204]   - Booting up control plane ...
	I0610 11:54:52.067807   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 11:54:52.069012   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 11:54:52.070508   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 11:54:52.071669   57945 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 11:54:52.074772   57945 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 11:54:53.633176   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:56.705245   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:02.785227   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:05.857320   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:11.941172   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:15.009275   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:21.089235   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:24.161264   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:32.077145   57945 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0610 11:55:32.077542   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:55:32.077740   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:55:30.241187   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:33.313200   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:37.078114   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:55:37.078357   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:55:39.393317   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:42.465223   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:47.078706   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:55:47.078906   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:55:48.545281   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:51.617229   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:57.697600   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:00.769294   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:07.079053   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:56:07.079285   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:56:06.849261   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:09.925249   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:16.001299   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:19.077309   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:25.153200   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:28.225172   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:31.226848   60146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 11:56:31.226888   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetMachineName
	I0610 11:56:31.227225   60146 buildroot.go:166] provisioning hostname "default-k8s-diff-port-281114"
	I0610 11:56:31.227250   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetMachineName
	I0610 11:56:31.227458   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:31.229187   60146 machine.go:97] duration metric: took 4m37.416418256s to provisionDockerMachine
	I0610 11:56:31.229224   60146 fix.go:56] duration metric: took 4m37.441343871s for fixHost
	I0610 11:56:31.229230   60146 start.go:83] releasing machines lock for "default-k8s-diff-port-281114", held for 4m37.44136358s
	W0610 11:56:31.229245   60146 start.go:713] error starting host: provision: host is not running
	W0610 11:56:31.229314   60146 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0610 11:56:31.229325   60146 start.go:728] Will try again in 5 seconds ...
	I0610 11:56:36.230954   60146 start.go:360] acquireMachinesLock for default-k8s-diff-port-281114: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 11:56:36.231068   60146 start.go:364] duration metric: took 60.465µs to acquireMachinesLock for "default-k8s-diff-port-281114"
	I0610 11:56:36.231091   60146 start.go:96] Skipping create...Using existing machine configuration
	I0610 11:56:36.231096   60146 fix.go:54] fixHost starting: 
	I0610 11:56:36.231372   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:56:36.231392   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:56:36.247286   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38849
	I0610 11:56:36.247715   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:56:36.248272   60146 main.go:141] libmachine: Using API Version  1
	I0610 11:56:36.248292   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:56:36.248585   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:56:36.248787   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:36.248939   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 11:56:36.250776   60146 fix.go:112] recreateIfNeeded on default-k8s-diff-port-281114: state=Stopped err=<nil>
	I0610 11:56:36.250796   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	W0610 11:56:36.250950   60146 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 11:56:36.252942   60146 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-281114" ...
	I0610 11:56:36.254300   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Start
	I0610 11:56:36.254515   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Ensuring networks are active...
	I0610 11:56:36.255281   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Ensuring network default is active
	I0610 11:56:36.255626   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Ensuring network mk-default-k8s-diff-port-281114 is active
	I0610 11:56:36.256059   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Getting domain xml...
	I0610 11:56:36.256819   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Creating domain...
	I0610 11:56:37.521102   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting to get IP...
	I0610 11:56:37.522061   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:37.522494   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:37.522553   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:37.522473   61276 retry.go:31] will retry after 220.098219ms: waiting for machine to come up
	I0610 11:56:37.743932   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:37.744482   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:37.744513   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:37.744440   61276 retry.go:31] will retry after 292.471184ms: waiting for machine to come up
	I0610 11:56:38.038937   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:38.039497   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:38.039526   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:38.039454   61276 retry.go:31] will retry after 446.869846ms: waiting for machine to come up
	I0610 11:56:38.488091   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:38.488684   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:38.488708   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:38.488635   61276 retry.go:31] will retry after 607.787706ms: waiting for machine to come up
	I0610 11:56:39.098375   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:39.098845   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:39.098875   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:39.098795   61276 retry.go:31] will retry after 610.636143ms: waiting for machine to come up
	I0610 11:56:39.710692   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:39.711170   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:39.711198   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:39.711106   61276 retry.go:31] will retry after 598.132053ms: waiting for machine to come up
	I0610 11:56:40.310889   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:40.311397   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:40.311420   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:40.311328   61276 retry.go:31] will retry after 1.191704846s: waiting for machine to come up
	I0610 11:56:41.505131   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:41.505601   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:41.505631   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:41.505572   61276 retry.go:31] will retry after 937.081207ms: waiting for machine to come up
	I0610 11:56:42.444793   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:42.445368   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:42.445396   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:42.445338   61276 retry.go:31] will retry after 1.721662133s: waiting for machine to come up
	I0610 11:56:47.078993   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:56:47.079439   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:56:47.079463   57945 kubeadm.go:309] 
	I0610 11:56:47.079513   57945 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0610 11:56:47.079597   57945 kubeadm.go:309] 		timed out waiting for the condition
	I0610 11:56:47.079629   57945 kubeadm.go:309] 
	I0610 11:56:47.079678   57945 kubeadm.go:309] 	This error is likely caused by:
	I0610 11:56:47.079718   57945 kubeadm.go:309] 		- The kubelet is not running
	I0610 11:56:47.079865   57945 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0610 11:56:47.079876   57945 kubeadm.go:309] 
	I0610 11:56:47.080014   57945 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0610 11:56:47.080077   57945 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0610 11:56:47.080132   57945 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0610 11:56:47.080151   57945 kubeadm.go:309] 
	I0610 11:56:47.080280   57945 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0610 11:56:47.080377   57945 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0610 11:56:47.080389   57945 kubeadm.go:309] 
	I0610 11:56:47.080543   57945 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0610 11:56:47.080663   57945 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0610 11:56:47.080769   57945 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0610 11:56:47.080862   57945 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0610 11:56:47.080874   57945 kubeadm.go:309] 
	I0610 11:56:47.081877   57945 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 11:56:47.082023   57945 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0610 11:56:47.082137   57945 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0610 11:56:47.082233   57945 kubeadm.go:393] duration metric: took 8m2.423366884s to StartCluster
	I0610 11:56:47.082273   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:56:47.082325   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:56:47.130548   57945 cri.go:89] found id: ""
	I0610 11:56:47.130585   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.130596   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:56:47.130603   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:56:47.130673   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:56:47.170087   57945 cri.go:89] found id: ""
	I0610 11:56:47.170124   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.170136   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:56:47.170144   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:56:47.170219   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:56:47.210394   57945 cri.go:89] found id: ""
	I0610 11:56:47.210430   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.210442   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:56:47.210450   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:56:47.210532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:56:47.246002   57945 cri.go:89] found id: ""
	I0610 11:56:47.246032   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.246043   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:56:47.246051   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:56:47.246119   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:56:47.282333   57945 cri.go:89] found id: ""
	I0610 11:56:47.282361   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.282369   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:56:47.282375   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:56:47.282432   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:56:47.316205   57945 cri.go:89] found id: ""
	I0610 11:56:47.316241   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.316254   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:56:47.316262   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:56:47.316323   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:56:47.356012   57945 cri.go:89] found id: ""
	I0610 11:56:47.356047   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.356060   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:56:47.356069   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:56:47.356140   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:56:47.404624   57945 cri.go:89] found id: ""
	I0610 11:56:47.404655   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.404666   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:56:47.404678   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:56:47.404694   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:56:47.475236   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:56:47.475282   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:56:47.493382   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:56:47.493418   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:56:47.589894   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:56:47.589918   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:56:47.589934   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:56:47.726080   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:56:47.726123   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0610 11:56:47.770399   57945 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0610 11:56:47.770451   57945 out.go:239] * 
	W0610 11:56:47.770532   57945 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
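	For reference, the checks suggested in the kubeadm output above can be run from the host with minikube ssh; a minimal sketch, with the profile name left as a placeholder rather than taken from this log:

	# Sketch only: inspect the kubelet and the control-plane containers on the failing node.
	minikube ssh -p <profile> -- "sudo systemctl status kubelet --no-pager"
	minikube ssh -p <profile> -- "sudo journalctl -u kubelet --no-pager | tail -n 50"
	# List kube containers via cri-o, exactly as the hint above recommends:
	minikube ssh -p <profile> -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"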
	
	W0610 11:56:47.770558   57945 out.go:239] * 
	W0610 11:56:47.771459   57945 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 11:56:47.775172   57945 out.go:177] 
	W0610 11:56:47.776444   57945 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0610 11:56:47.776509   57945 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0610 11:56:47.776545   57945 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0610 11:56:47.778306   57945 out.go:177] 
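	The suggestion above can be applied on a retry; a hedged sketch of such an invocation (the profile name is a placeholder, while the version and runtime are the ones shown in this log):

	# Sketch only: retry with the kubelet cgroup driver forced to systemd, per the suggestion above.
	minikube start -p <profile> \
	  --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd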
	I0610 11:56:44.168288   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:44.168809   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:44.168832   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:44.168762   61276 retry.go:31] will retry after 2.181806835s: waiting for machine to come up
	I0610 11:56:46.352210   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:46.352736   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:46.352764   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:46.352688   61276 retry.go:31] will retry after 2.388171324s: waiting for machine to come up
	I0610 11:56:48.744345   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:48.744853   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:48.744890   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:48.744815   61276 retry.go:31] will retry after 2.54250043s: waiting for machine to come up
	I0610 11:56:51.288816   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:51.289222   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:51.289252   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:51.289190   61276 retry.go:31] will retry after 4.525493142s: waiting for machine to come up
	I0610 11:56:55.819862   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.820393   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Found IP for machine: 192.168.50.222
	I0610 11:56:55.820416   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Reserving static IP address...
	I0610 11:56:55.820433   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has current primary IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.820941   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-281114", mac: "52:54:00:23:06:35", ip: "192.168.50.222"} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:55.820984   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Reserved static IP address: 192.168.50.222
	I0610 11:56:55.821000   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | skip adding static IP to network mk-default-k8s-diff-port-281114 - found existing host DHCP lease matching {name: "default-k8s-diff-port-281114", mac: "52:54:00:23:06:35", ip: "192.168.50.222"}
	I0610 11:56:55.821012   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Getting to WaitForSSH function...
	I0610 11:56:55.821028   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for SSH to be available...
	I0610 11:56:55.823149   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.823499   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:55.823530   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.823680   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Using SSH client type: external
	I0610 11:56:55.823717   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Using SSH private key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa (-rw-------)
	I0610 11:56:55.823750   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0610 11:56:55.823764   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | About to run SSH command:
	I0610 11:56:55.823778   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | exit 0
	I0610 11:56:55.949264   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | SSH cmd err, output: <nil>: 
	I0610 11:56:55.949623   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetConfigRaw
	I0610 11:56:55.950371   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetIP
	I0610 11:56:55.953239   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.953602   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:55.953746   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.953874   60146 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/config.json ...
	I0610 11:56:55.954172   60146 machine.go:94] provisionDockerMachine start ...
	I0610 11:56:55.954203   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:55.954415   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:55.956837   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.957344   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:55.957361   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.957521   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:55.957710   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:55.957887   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:55.958055   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:55.958211   60146 main.go:141] libmachine: Using SSH client type: native
	I0610 11:56:55.958425   60146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.222 22 <nil> <nil>}
	I0610 11:56:55.958445   60146 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 11:56:56.061295   60146 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 11:56:56.061331   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetMachineName
	I0610 11:56:56.061559   60146 buildroot.go:166] provisioning hostname "default-k8s-diff-port-281114"
	I0610 11:56:56.061588   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetMachineName
	I0610 11:56:56.061787   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.064578   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.064938   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.064975   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.065131   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:56.065383   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.065565   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.065681   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:56.065874   60146 main.go:141] libmachine: Using SSH client type: native
	I0610 11:56:56.066079   60146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.222 22 <nil> <nil>}
	I0610 11:56:56.066094   60146 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-281114 && echo "default-k8s-diff-port-281114" | sudo tee /etc/hostname
	I0610 11:56:56.183602   60146 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-281114
	
	I0610 11:56:56.183626   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.186613   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.186986   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.187016   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.187260   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:56.187472   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.187656   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.187812   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:56.187993   60146 main.go:141] libmachine: Using SSH client type: native
	I0610 11:56:56.188192   60146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.222 22 <nil> <nil>}
	I0610 11:56:56.188220   60146 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-281114' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-281114/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-281114' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 11:56:56.298027   60146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 11:56:56.298057   60146 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19046-3880/.minikube CaCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19046-3880/.minikube}
	I0610 11:56:56.298076   60146 buildroot.go:174] setting up certificates
	I0610 11:56:56.298083   60146 provision.go:84] configureAuth start
	I0610 11:56:56.298094   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetMachineName
	I0610 11:56:56.298385   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetIP
	I0610 11:56:56.301219   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.301584   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.301614   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.301816   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.304010   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.304412   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.304438   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.304593   60146 provision.go:143] copyHostCerts
	I0610 11:56:56.304668   60146 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem, removing ...
	I0610 11:56:56.304681   60146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 11:56:56.304765   60146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem (1082 bytes)
	I0610 11:56:56.304874   60146 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem, removing ...
	I0610 11:56:56.304884   60146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 11:56:56.304924   60146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem (1123 bytes)
	I0610 11:56:56.305040   60146 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem, removing ...
	I0610 11:56:56.305050   60146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 11:56:56.305084   60146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem (1675 bytes)
	I0610 11:56:56.305153   60146 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-281114 san=[127.0.0.1 192.168.50.222 default-k8s-diff-port-281114 localhost minikube]
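	To confirm that the SAN list recorded above made it into the generated server certificate, one could inspect it with openssl on the host; a sketch using the path from the log line above:

	# Sketch: print the subject alternative names of the freshly generated server cert.
	openssl x509 -in /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem \
	  -noout -text | grep -A1 'Subject Alternative Name'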
	I0610 11:56:56.411016   60146 provision.go:177] copyRemoteCerts
	I0610 11:56:56.411072   60146 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 11:56:56.411093   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.413736   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.414075   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.414122   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.414292   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:56.414498   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.414686   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:56.414785   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 11:56:56.495039   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 11:56:56.519750   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 11:56:56.543202   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0610 11:56:56.566420   60146 provision.go:87] duration metric: took 268.326859ms to configureAuth
	I0610 11:56:56.566449   60146 buildroot.go:189] setting minikube options for container-runtime
	I0610 11:56:56.566653   60146 config.go:182] Loaded profile config "default-k8s-diff-port-281114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:56:56.566732   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.569742   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.570135   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.570159   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.570411   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:56.570635   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.570815   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.570969   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:56.571169   60146 main.go:141] libmachine: Using SSH client type: native
	I0610 11:56:56.571334   60146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.222 22 <nil> <nil>}
	I0610 11:56:56.571350   60146 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 11:56:56.846705   60146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 11:56:56.846727   60146 machine.go:97] duration metric: took 892.536744ms to provisionDockerMachine
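	A quick way to confirm the insecure-registry option written above is in effect inside the guest would be to read the drop-in back and check how the crio unit consumes it; a sketch, assuming (as is typical for the minikube guest image) that the unit references the sysconfig file via EnvironmentFile:

	# Sketch: verify the drop-in written during provisioning and that crio restarted cleanly.
	cat /etc/sysconfig/crio.minikube
	systemctl cat crio | grep -i EnvironmentFile   # assumption: the unit sources the sysconfig file
	systemctl is-active crio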
	I0610 11:56:56.846741   60146 start.go:293] postStartSetup for "default-k8s-diff-port-281114" (driver="kvm2")
	I0610 11:56:56.846753   60146 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 11:56:56.846795   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:56.847123   60146 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 11:56:56.847150   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.849968   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.850300   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.850322   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.850518   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:56.850706   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.850889   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:56.851010   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 11:56:56.935027   60146 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 11:56:56.939465   60146 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 11:56:56.939489   60146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/addons for local assets ...
	I0610 11:56:56.939558   60146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/files for local assets ...
	I0610 11:56:56.939641   60146 filesync.go:149] local asset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> 107582.pem in /etc/ssl/certs
	I0610 11:56:56.939728   60146 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 11:56:56.948993   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /etc/ssl/certs/107582.pem (1708 bytes)
	I0610 11:56:56.974611   60146 start.go:296] duration metric: took 127.85527ms for postStartSetup
	I0610 11:56:56.974655   60146 fix.go:56] duration metric: took 20.74355824s for fixHost
	I0610 11:56:56.974673   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.978036   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.978438   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.978471   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.978612   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:56.978804   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.978984   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.979157   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:56.979343   60146 main.go:141] libmachine: Using SSH client type: native
	I0610 11:56:56.979506   60146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.222 22 <nil> <nil>}
	I0610 11:56:56.979524   60146 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 11:56:57.081416   60146 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718020617.058533839
	
	I0610 11:56:57.081444   60146 fix.go:216] guest clock: 1718020617.058533839
	I0610 11:56:57.081454   60146 fix.go:229] Guest: 2024-06-10 11:56:57.058533839 +0000 UTC Remote: 2024-06-10 11:56:56.974658577 +0000 UTC m=+303.333936196 (delta=83.875262ms)
	I0610 11:56:57.081476   60146 fix.go:200] guest clock delta is within tolerance: 83.875262ms
	I0610 11:56:57.081482   60146 start.go:83] releasing machines lock for "default-k8s-diff-port-281114", held for 20.850403793s
	I0610 11:56:57.081499   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:57.081775   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetIP
	I0610 11:56:57.084904   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:57.085408   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:57.085442   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:57.085619   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:57.086222   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:57.086432   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:57.086519   60146 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 11:56:57.086571   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:57.086660   60146 ssh_runner.go:195] Run: cat /version.json
	I0610 11:56:57.086694   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:57.089544   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:57.089869   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:57.089904   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:57.089931   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:57.090091   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:57.090259   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:57.090362   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:57.090388   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:57.090444   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:57.090539   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:57.090613   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 11:56:57.090667   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:57.090806   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:57.090969   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 11:56:57.215361   60146 ssh_runner.go:195] Run: systemctl --version
	I0610 11:56:57.221479   60146 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 11:56:57.363318   60146 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 11:56:57.369389   60146 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 11:56:57.369465   60146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 11:56:57.385195   60146 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 11:56:57.385217   60146 start.go:494] detecting cgroup driver to use...
	I0610 11:56:57.385284   60146 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 11:56:57.404923   60146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 11:56:57.420158   60146 docker.go:217] disabling cri-docker service (if available) ...
	I0610 11:56:57.420204   60146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 11:56:57.434385   60146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 11:56:57.448340   60146 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 11:56:57.574978   60146 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 11:56:57.714523   60146 docker.go:233] disabling docker service ...
	I0610 11:56:57.714620   60146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 11:56:57.729914   60146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 11:56:57.742557   60146 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 11:56:57.885770   60146 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 11:56:58.018120   60146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 11:56:58.031606   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 11:56:58.049312   60146 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0610 11:56:58.049389   60146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:56:58.059800   60146 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 11:56:58.059877   60146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:56:58.071774   60146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:56:58.082332   60146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:56:58.093474   60146 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 11:56:58.104231   60146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:56:58.114328   60146 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:56:58.131812   60146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
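	The sed edits above leave the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl in the cri-o drop-in; a sketch of checking the result inside the guest:

	# Sketch: confirm the values written by the sed commands above.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf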
	I0610 11:56:58.142612   60146 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 11:56:58.152681   60146 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0610 11:56:58.152750   60146 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0610 11:56:58.166120   60146 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
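	After the modprobe fallback and the ip_forward write above, the kernel state can be re-checked; a sketch:

	# Sketch: re-run the checks that failed before br_netfilter was loaded.
	lsmod | grep br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables
	cat /proc/sys/net/ipv4/ip_forward   # expected to print 1 after the echo above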
	I0610 11:56:58.176281   60146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:56:58.306558   60146 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0610 11:56:58.446379   60146 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 11:56:58.446460   60146 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 11:56:58.452523   60146 start.go:562] Will wait 60s for crictl version
	I0610 11:56:58.452619   60146 ssh_runner.go:195] Run: which crictl
	I0610 11:56:58.456611   60146 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 11:56:58.503496   60146 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0610 11:56:58.503581   60146 ssh_runner.go:195] Run: crio --version
	I0610 11:56:58.532834   60146 ssh_runner.go:195] Run: crio --version
	I0610 11:56:58.562697   60146 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0610 11:56:58.563974   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetIP
	I0610 11:56:58.566760   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:58.567107   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:58.567142   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:58.567408   60146 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0610 11:56:58.571671   60146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 11:56:58.584423   60146 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-281114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.1 ClusterName:default-k8s-diff-port-281114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.222 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 11:56:58.584535   60146 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 11:56:58.584588   60146 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 11:56:58.622788   60146 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0610 11:56:58.622862   60146 ssh_runner.go:195] Run: which lz4
	I0610 11:56:58.627561   60146 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0610 11:56:58.632560   60146 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 11:56:58.632595   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0610 11:56:59.943375   60146 crio.go:462] duration metric: took 1.315853744s to copy over tarball
	I0610 11:56:59.943444   60146 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 11:57:02.167265   60146 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.223791523s)
	I0610 11:57:02.167299   60146 crio.go:469] duration metric: took 2.223894548s to extract the tarball
	I0610 11:57:02.167308   60146 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 11:57:02.206288   60146 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 11:57:02.250013   60146 crio.go:514] all images are preloaded for cri-o runtime.
	I0610 11:57:02.250034   60146 cache_images.go:84] Images are preloaded, skipping loading
	I0610 11:57:02.250041   60146 kubeadm.go:928] updating node { 192.168.50.222 8444 v1.30.1 crio true true} ...
	I0610 11:57:02.250163   60146 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-281114 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-281114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
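	The unit override above is written out a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; viewing the merged unit inside the guest is a cheap way to confirm the flags. A sketch:

	# Sketch: show the kubelet unit together with the generated 10-kubeadm.conf drop-in.
	sudo systemctl cat kubelet
	# minikube itself then runs daemon-reload and starts the service, as the log below shows:
	sudo systemctl daemon-reload && sudo systemctl start kubelet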
	I0610 11:57:02.250261   60146 ssh_runner.go:195] Run: crio config
	I0610 11:57:02.305797   60146 cni.go:84] Creating CNI manager for ""
	I0610 11:57:02.305822   60146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:57:02.305838   60146 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 11:57:02.305867   60146 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.222 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-281114 NodeName:default-k8s-diff-port-281114 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 11:57:02.306030   60146 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.222
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-281114"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.222
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
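
For illustration, a minimal Go sketch of rendering a kubeadm ClusterConfiguration like the one logged above with the standard-library text/template package. The field values mirror this run's config (control-plane.minikube.internal:8444, v1.30.1, 10.244.0.0/16, 10.96.0.0/12); the template variable names and the program itself are assumptions for illustration, not minikube's actual generator.

// kubeadm_config_sketch.go - render a trimmed ClusterConfiguration from values
// taken from the log above (default-k8s-diff-port-281114).
package main

import (
	"os"
	"text/template"
)

const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: {{.Endpoint}}:{{.Port}}
kubernetesVersion: {{.Version}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodCIDR}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(clusterCfg))
	data := map[string]string{
		"Endpoint":    "control-plane.minikube.internal",
		"Port":        "8444",
		"Version":     "v1.30.1",
		"PodCIDR":     "10.244.0.0/16",
		"ServiceCIDR": "10.96.0.0/12",
	}
	// Write the rendered YAML to stdout; the real flow scp's it to
	// /var/tmp/minikube/kubeadm.yaml.new as shown a few lines below.
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
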
	I0610 11:57:02.306111   60146 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 11:57:02.316522   60146 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 11:57:02.316585   60146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 11:57:02.326138   60146 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0610 11:57:02.342685   60146 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 11:57:02.359693   60146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0610 11:57:02.375771   60146 ssh_runner.go:195] Run: grep 192.168.50.222	control-plane.minikube.internal$ /etc/hosts
	I0610 11:57:02.379280   60146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 11:57:02.390797   60146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:57:02.511286   60146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 11:57:02.529051   60146 certs.go:68] Setting up /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114 for IP: 192.168.50.222
	I0610 11:57:02.529076   60146 certs.go:194] generating shared ca certs ...
	I0610 11:57:02.529095   60146 certs.go:226] acquiring lock for ca certs: {Name:mke8b68fecbd8b649419d142cdde25446085a9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:57:02.529281   60146 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key
	I0610 11:57:02.529358   60146 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key
	I0610 11:57:02.529373   60146 certs.go:256] generating profile certs ...
	I0610 11:57:02.529492   60146 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/client.key
	I0610 11:57:02.529576   60146 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/apiserver.key.d35a2a33
	I0610 11:57:02.529626   60146 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/proxy-client.key
	I0610 11:57:02.529769   60146 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem (1338 bytes)
	W0610 11:57:02.529810   60146 certs.go:480] ignoring /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758_empty.pem, impossibly tiny 0 bytes
	I0610 11:57:02.529823   60146 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 11:57:02.529857   60146 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem (1082 bytes)
	I0610 11:57:02.529893   60146 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem (1123 bytes)
	I0610 11:57:02.529924   60146 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem (1675 bytes)
	I0610 11:57:02.529981   60146 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem (1708 bytes)
	I0610 11:57:02.531166   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 11:57:02.570183   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 11:57:02.607339   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 11:57:02.653464   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 11:57:02.694329   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0610 11:57:02.722420   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 11:57:02.747321   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 11:57:02.772755   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 11:57:02.797241   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 11:57:02.821892   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem --> /usr/share/ca-certificates/10758.pem (1338 bytes)
	I0610 11:57:02.846925   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /usr/share/ca-certificates/107582.pem (1708 bytes)
	I0610 11:57:02.870986   60146 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 11:57:02.889088   60146 ssh_runner.go:195] Run: openssl version
	I0610 11:57:02.894820   60146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10758.pem && ln -fs /usr/share/ca-certificates/10758.pem /etc/ssl/certs/10758.pem"
	I0610 11:57:02.906689   60146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10758.pem
	I0610 11:57:02.911048   60146 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:34 /usr/share/ca-certificates/10758.pem
	I0610 11:57:02.911095   60146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10758.pem
	I0610 11:57:02.916866   60146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10758.pem /etc/ssl/certs/51391683.0"
	I0610 11:57:02.928405   60146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107582.pem && ln -fs /usr/share/ca-certificates/107582.pem /etc/ssl/certs/107582.pem"
	I0610 11:57:02.941254   60146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107582.pem
	I0610 11:57:02.945849   60146 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:34 /usr/share/ca-certificates/107582.pem
	I0610 11:57:02.945899   60146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107582.pem
	I0610 11:57:02.951833   60146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107582.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 11:57:02.963661   60146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 11:57:02.975117   60146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:57:02.979667   60146 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:57:02.979731   60146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:57:02.985212   60146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 11:57:02.997007   60146 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 11:57:03.001498   60146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0610 11:57:03.007549   60146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0610 11:57:03.013717   60146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0610 11:57:03.019947   60146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0610 11:57:03.025890   60146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0610 11:57:03.031443   60146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
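
The `-checkend 86400` invocations above ask openssl whether each certificate expires within the next 24 hours. A rough standard-library Go equivalent (the paths are copied from the log; the helper name is hypothetical, not minikube's implementation):

// certcheck_sketch.go - report whether a PEM certificate expires within d.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when the certificate's NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		fmt.Printf("%s expires within 24h: %v (err=%v)\n", p, soon, err)
	}
}
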
	I0610 11:57:03.036936   60146 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-281114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-281114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.222 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:57:03.037056   60146 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0610 11:57:03.037111   60146 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 11:57:03.088497   60146 cri.go:89] found id: ""
	I0610 11:57:03.088555   60146 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0610 11:57:03.099358   60146 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0610 11:57:03.099381   60146 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0610 11:57:03.099386   60146 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0610 11:57:03.099436   60146 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0610 11:57:03.109092   60146 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0610 11:57:03.110113   60146 kubeconfig.go:125] found "default-k8s-diff-port-281114" server: "https://192.168.50.222:8444"
	I0610 11:57:03.112565   60146 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0610 11:57:03.122338   60146 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.222
	I0610 11:57:03.122370   60146 kubeadm.go:1154] stopping kube-system containers ...
	I0610 11:57:03.122392   60146 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0610 11:57:03.122453   60146 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 11:57:03.159369   60146 cri.go:89] found id: ""
	I0610 11:57:03.159470   60146 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0610 11:57:03.176704   60146 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:57:03.186957   60146 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:57:03.186977   60146 kubeadm.go:156] found existing configuration files:
	
	I0610 11:57:03.187040   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0610 11:57:03.196318   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:57:03.196397   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:57:03.205630   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0610 11:57:03.214480   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:57:03.214538   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:57:03.223939   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0610 11:57:03.232372   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:57:03.232422   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:57:03.241846   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0610 11:57:03.251014   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:57:03.251092   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 11:57:03.260115   60146 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 11:57:03.269792   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:57:03.388582   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:57:04.274314   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:57:04.473968   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:57:04.531884   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
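
The restart path re-runs individual `kubeadm init` phases against the existing config rather than performing a full init. A small Go sketch that shells out to the same five phase commands shown above (binary and config paths exactly as logged; error handling kept minimal, and running it would require root on the node):

// kubeadm_phases_sketch.go - replay the init phases from the restart sequence.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.30.1/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", cfg)
		out, err := exec.Command(kubeadm, args...).CombinedOutput()
		fmt.Printf("kubeadm %v: err=%v\n%s\n", phase, err, out)
	}
}
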
	I0610 11:57:04.618371   60146 api_server.go:52] waiting for apiserver process to appear ...
	I0610 11:57:04.618464   60146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:57:05.118733   60146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:57:05.619107   60146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:57:06.118937   60146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:57:06.138176   60146 api_server.go:72] duration metric: took 1.519803379s to wait for apiserver process to appear ...
	I0610 11:57:06.138205   60146 api_server.go:88] waiting for apiserver healthz status ...
	I0610 11:57:06.138223   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:09.201655   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0610 11:57:09.201680   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0610 11:57:09.201691   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:09.305898   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:09.305934   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:09.639319   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:09.644006   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:09.644041   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:10.138712   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:10.144989   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:10.145024   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:10.638505   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:10.642825   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:10.642861   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:11.138360   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:11.143062   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:11.143087   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:11.639058   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:11.643394   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:11.643419   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:12.139125   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:12.143425   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:12.143452   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:12.639074   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:12.644121   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 200:
	ok
	I0610 11:57:12.650538   60146 api_server.go:141] control plane version: v1.30.1
	I0610 11:57:12.650570   60146 api_server.go:131] duration metric: took 6.512357672s to wait for apiserver health ...
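
The healthz wait above polls https://192.168.50.222:8444/healthz roughly every 500ms until it returns 200 ("ok"), logging the 403/500 component breakdowns in the meantime. A self-contained Go sketch of that loop; TLS verification is skipped here only because the sketch does not load the cluster CA, unlike the real client:

// healthz_wait_sketch.go - poll an apiserver /healthz endpoint until 200 or timeout.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // endpoint answered "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms retry cadence in the log
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	fmt.Println(waitHealthy("https://192.168.50.222:8444/healthz", 2*time.Minute))
}
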
	I0610 11:57:12.650581   60146 cni.go:84] Creating CNI manager for ""
	I0610 11:57:12.650590   60146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:57:12.652548   60146 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 11:57:12.653918   60146 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 11:57:12.664536   60146 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0610 11:57:12.685230   60146 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 11:57:12.694511   60146 system_pods.go:59] 8 kube-system pods found
	I0610 11:57:12.694546   60146 system_pods.go:61] "coredns-7db6d8ff4d-5ngxc" [26f3438c-a6a2-43d5-b79d-991752b4cc10] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0610 11:57:12.694561   60146 system_pods.go:61] "etcd-default-k8s-diff-port-281114" [e8a3dc04-a9f0-4670-8256-7a0a617958ba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0610 11:57:12.694610   60146 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-281114" [45080cf7-94ee-4c55-a3b4-cfa8d3b4edbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0610 11:57:12.694626   60146 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-281114" [3f51cb0c-bb90-4847-acd4-0ed8a58608ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0610 11:57:12.694633   60146 system_pods.go:61] "kube-proxy-896ts" [13b994b7-8d0e-4e3d-9902-3bdd7a9ab949] Running
	I0610 11:57:12.694648   60146 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-281114" [c205a8b5-e970-40ed-83d7-462781bcf41f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0610 11:57:12.694659   60146 system_pods.go:61] "metrics-server-569cc877fc-jhv6f" [60a2e6ad-714a-4c6d-b586-232d130397a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 11:57:12.694665   60146 system_pods.go:61] "storage-provisioner" [b54a4493-2c6d-4a5e-b74c-ba9863979688] Running
	I0610 11:57:12.694675   60146 system_pods.go:74] duration metric: took 9.424371ms to wait for pod list to return data ...
	I0610 11:57:12.694687   60146 node_conditions.go:102] verifying NodePressure condition ...
	I0610 11:57:12.697547   60146 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 11:57:12.697571   60146 node_conditions.go:123] node cpu capacity is 2
	I0610 11:57:12.697583   60146 node_conditions.go:105] duration metric: took 2.887217ms to run NodePressure ...
	I0610 11:57:12.697633   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:57:12.966838   60146 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0610 11:57:12.971616   60146 kubeadm.go:733] kubelet initialised
	I0610 11:57:12.971641   60146 kubeadm.go:734] duration metric: took 4.781436ms waiting for restarted kubelet to initialise ...
	I0610 11:57:12.971649   60146 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 11:57:12.977162   60146 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-5ngxc" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:14.984174   60146 pod_ready.go:102] pod "coredns-7db6d8ff4d-5ngxc" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:16.984365   60146 pod_ready.go:102] pod "coredns-7db6d8ff4d-5ngxc" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:18.985423   60146 pod_ready.go:92] pod "coredns-7db6d8ff4d-5ngxc" in "kube-system" namespace has status "Ready":"True"
	I0610 11:57:18.985447   60146 pod_ready.go:81] duration metric: took 6.008259879s for pod "coredns-7db6d8ff4d-5ngxc" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:18.985459   60146 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:18.992228   60146 pod_ready.go:92] pod "etcd-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 11:57:18.992249   60146 pod_ready.go:81] duration metric: took 6.782049ms for pod "etcd-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:18.992261   60146 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:18.998328   60146 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 11:57:18.998354   60146 pod_ready.go:81] duration metric: took 6.080448ms for pod "kube-apiserver-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:18.998363   60146 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:21.004441   60146 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:23.005035   60146 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:23.505290   60146 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 11:57:23.505316   60146 pod_ready.go:81] duration metric: took 4.506946099s for pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:23.505326   60146 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-896ts" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:23.510714   60146 pod_ready.go:92] pod "kube-proxy-896ts" in "kube-system" namespace has status "Ready":"True"
	I0610 11:57:23.510733   60146 pod_ready.go:81] duration metric: took 5.402289ms for pod "kube-proxy-896ts" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:23.510741   60146 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:23.515120   60146 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 11:57:23.515138   60146 pod_ready.go:81] duration metric: took 4.391539ms for pod "kube-scheduler-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:23.515145   60146 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:25.522456   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:28.021723   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:30.521428   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:32.521868   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:35.020800   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:37.021406   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:39.022230   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:41.026828   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:43.521675   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:46.021385   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:48.521085   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:50.521489   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:53.020867   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:55.021644   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:57.521383   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:59.521662   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:02.021864   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:04.521572   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:07.021580   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:09.521128   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:11.522117   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:14.021270   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:16.022304   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:18.521534   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:21.021061   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:23.021721   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:25.521779   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:28.021005   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:30.023892   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:32.521068   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:35.022247   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:37.022812   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:39.521194   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:41.521813   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:43.521847   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:46.021646   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:48.521791   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:51.020662   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:53.020752   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:55.021736   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:57.521819   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:00.021201   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:02.521497   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:05.021115   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:07.521673   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:10.022328   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:12.521244   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:15.020407   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:17.021142   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:19.021398   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:21.021949   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:23.022714   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:25.521324   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:27.523011   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:30.021380   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:32.021456   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:34.021713   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:36.523229   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:39.023269   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:41.521241   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:43.522882   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:46.021368   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:48.021781   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:50.022979   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:52.522357   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:55.022181   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:57.521630   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:00.022732   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:02.524425   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:05.021218   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:07.021736   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:09.521121   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:12.022455   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:14.023274   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:16.521626   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:19.021624   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:21.021728   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:23.022457   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:25.023406   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:27.523393   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:30.022146   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:32.520816   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:34.522050   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:36.522345   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:39.021544   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:41.022726   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:43.520941   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:45.521181   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:47.522257   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:49.522829   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:51.523346   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:54.020982   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:56.021367   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:58.021467   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:00.021643   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:02.021791   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:04.021864   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:06.021968   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:08.521556   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:10.521588   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:12.521870   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:15.025925   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:17.523018   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:20.022903   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:22.521723   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:23.515523   60146 pod_ready.go:81] duration metric: took 4m0.000361045s for pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace to be "Ready" ...
	E0610 12:01:23.515558   60146 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0610 12:01:23.515582   60146 pod_ready.go:38] duration metric: took 4m10.543923644s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
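The long run of pod_ready.go lines above is minikube polling the Ready condition of the metrics-server pod roughly every 2.5s until the 4m0s WaitExtra deadline expires. As an illustration only (not minikube's actual implementation), a comparable wait can be written with client-go; the pod name, namespace, kubeconfig path and 4m0s timeout are taken from the log, while the client setup, poll interval and error handling are assumptions.

// Sketch of a Ready-condition poll with a hard timeout; names and paths
// are copied from the log above, everything else is assumed.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19046-3880/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// 4m0s matches the WaitExtra timeout reported in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-569cc877fc-jhv6f", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("ready wait finished, err =", err)
}

Returning false with a nil error on a failed Get keeps the poll going, which matches the behaviour visible in the log: the loop only stops when the Ready condition flips or the deadline is hit, at which point the wait reports the timeout seen at 12:01:23.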
	I0610 12:01:23.515614   60146 kubeadm.go:591] duration metric: took 4m20.4162222s to restartPrimaryControlPlane
	W0610 12:01:23.515715   60146 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0610 12:01:23.515751   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0610 12:01:54.687867   60146 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.172093979s)
	I0610 12:01:54.687931   60146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 12:01:54.704702   60146 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 12:01:54.714940   60146 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 12:01:54.724675   60146 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 12:01:54.724702   60146 kubeadm.go:156] found existing configuration files:
	
	I0610 12:01:54.724749   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0610 12:01:54.734652   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 12:01:54.734726   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 12:01:54.744642   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0610 12:01:54.755297   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 12:01:54.755375   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 12:01:54.765800   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0610 12:01:54.775568   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 12:01:54.775636   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 12:01:54.785076   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0610 12:01:54.793645   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 12:01:54.793706   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 12:01:54.803137   60146 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 12:01:54.855022   60146 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0610 12:01:54.855094   60146 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 12:01:54.995399   60146 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 12:01:54.995511   60146 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 12:01:54.995622   60146 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 12:01:55.194136   60146 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 12:01:55.196296   60146 out.go:204]   - Generating certificates and keys ...
	I0610 12:01:55.196396   60146 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 12:01:55.196475   60146 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 12:01:55.196575   60146 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 12:01:55.196680   60146 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 12:01:55.196792   60146 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 12:01:55.196874   60146 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 12:01:55.196984   60146 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 12:01:55.197077   60146 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 12:01:55.197158   60146 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 12:01:55.197231   60146 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 12:01:55.197265   60146 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 12:01:55.197320   60146 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 12:01:55.299197   60146 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 12:01:55.490367   60146 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 12:01:55.751377   60146 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 12:01:55.863144   60146 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 12:01:56.112395   60146 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 12:01:56.113059   60146 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 12:01:56.118410   60146 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 12:01:56.120277   60146 out.go:204]   - Booting up control plane ...
	I0610 12:01:56.120416   60146 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 12:01:56.120503   60146 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 12:01:56.120565   60146 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 12:01:56.138057   60146 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 12:01:56.138509   60146 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 12:01:56.138563   60146 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 12:01:56.263559   60146 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0610 12:01:56.263688   60146 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 12:01:57.264829   60146 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001316355s
	I0610 12:01:57.264927   60146 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0610 12:02:02.267632   60146 kubeadm.go:309] [api-check] The API server is healthy after 5.001644567s
	I0610 12:02:02.282693   60146 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 12:02:02.305741   60146 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 12:02:02.341283   60146 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 12:02:02.341527   60146 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-281114 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 12:02:02.355256   60146 kubeadm.go:309] [bootstrap-token] Using token: mkpvnr.wlx5xvctjlg8pi72
	I0610 12:02:02.356920   60146 out.go:204]   - Configuring RBAC rules ...
	I0610 12:02:02.357052   60146 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 12:02:02.367773   60146 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 12:02:02.376921   60146 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 12:02:02.386582   60146 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 12:02:02.390887   60146 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 12:02:02.399245   60146 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 12:02:02.674008   60146 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 12:02:03.137504   60146 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 12:02:03.673560   60146 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 12:02:03.674588   60146 kubeadm.go:309] 
	I0610 12:02:03.674677   60146 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 12:02:03.674694   60146 kubeadm.go:309] 
	I0610 12:02:03.674774   60146 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 12:02:03.674784   60146 kubeadm.go:309] 
	I0610 12:02:03.674813   60146 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 12:02:03.674924   60146 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 12:02:03.675014   60146 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 12:02:03.675026   60146 kubeadm.go:309] 
	I0610 12:02:03.675128   60146 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 12:02:03.675150   60146 kubeadm.go:309] 
	I0610 12:02:03.675225   60146 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 12:02:03.675234   60146 kubeadm.go:309] 
	I0610 12:02:03.675344   60146 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 12:02:03.675460   60146 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 12:02:03.675587   60146 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 12:02:03.677879   60146 kubeadm.go:309] 
	I0610 12:02:03.677961   60146 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 12:02:03.678057   60146 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 12:02:03.678068   60146 kubeadm.go:309] 
	I0610 12:02:03.678160   60146 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token mkpvnr.wlx5xvctjlg8pi72 \
	I0610 12:02:03.678304   60146 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e \
	I0610 12:02:03.678338   60146 kubeadm.go:309] 	--control-plane 
	I0610 12:02:03.678348   60146 kubeadm.go:309] 
	I0610 12:02:03.678446   60146 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 12:02:03.678460   60146 kubeadm.go:309] 
	I0610 12:02:03.678580   60146 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token mkpvnr.wlx5xvctjlg8pi72 \
	I0610 12:02:03.678726   60146 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e 
	I0610 12:02:03.678869   60146 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 12:02:03.678886   60146 cni.go:84] Creating CNI manager for ""
	I0610 12:02:03.678896   60146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 12:02:03.681019   60146 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 12:02:03.682415   60146 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 12:02:03.693028   60146 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0610 12:02:03.711436   60146 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 12:02:03.711534   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:03.711611   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-281114 minikube.k8s.io/updated_at=2024_06_10T12_02_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=default-k8s-diff-port-281114 minikube.k8s.io/primary=true
	I0610 12:02:03.888463   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:03.926946   60146 ops.go:34] apiserver oom_adj: -16
	I0610 12:02:04.389105   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:04.888545   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:05.389096   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:05.888853   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:06.389522   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:06.889491   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:07.389417   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:07.889485   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:08.388869   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:08.889480   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:09.389130   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:09.889052   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:10.389053   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:10.889177   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:11.388985   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:11.889405   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:12.388805   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:12.889139   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:13.389072   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:13.888843   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:14.389349   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:14.888798   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:15.388800   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:15.888491   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:16.389394   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:16.889175   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:17.007766   60146 kubeadm.go:1107] duration metric: took 13.296278569s to wait for elevateKubeSystemPrivileges
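The block of repeated "kubectl get sa default" invocations above is minikube retrying, at roughly 0.5s intervals, until the default ServiceAccount exists as part of the elevateKubeSystemPrivileges step. A minimal local sketch of that retry is shown below, shelling out the same way the ssh_runner lines do; the kubectl binary and kubeconfig paths come from the log, while the 2-minute bound and running the command outside the VM are assumptions for illustration.

// Retry "kubectl get sa default" until it succeeds or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed bound, not minikube's
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.1/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is present")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log shows ~0.5s spacing between attempts
	}
	fmt.Println("timed out waiting for the default service account")
}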
	W0610 12:02:17.007804   60146 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 12:02:17.007813   60146 kubeadm.go:393] duration metric: took 5m13.970894294s to StartCluster
	I0610 12:02:17.007835   60146 settings.go:142] acquiring lock: {Name:mk00410f6b6051b7558c7a348cc8c9f1c35c7547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:02:17.007914   60146 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 12:02:17.009456   60146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/kubeconfig: {Name:mk6bc087e599296d9e4a696a021944fac20ee98b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:02:17.009751   60146 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.222 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 12:02:17.011669   60146 out.go:177] * Verifying Kubernetes components...
	I0610 12:02:17.009833   60146 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 12:02:17.011705   60146 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-281114"
	I0610 12:02:17.013481   60146 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-281114"
	W0610 12:02:17.013496   60146 addons.go:243] addon storage-provisioner should already be in state true
	I0610 12:02:17.013539   60146 host.go:66] Checking if "default-k8s-diff-port-281114" exists ...
	I0610 12:02:17.011715   60146 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-281114"
	I0610 12:02:17.013612   60146 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-281114"
	W0610 12:02:17.013628   60146 addons.go:243] addon metrics-server should already be in state true
	I0610 12:02:17.013669   60146 host.go:66] Checking if "default-k8s-diff-port-281114" exists ...
	I0610 12:02:17.009996   60146 config.go:182] Loaded profile config "default-k8s-diff-port-281114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 12:02:17.011717   60146 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-281114"
	I0610 12:02:17.013437   60146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:02:17.013792   60146 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-281114"
	I0610 12:02:17.013961   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.014009   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.014043   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.014066   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.014174   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.014211   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.030604   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43907
	I0610 12:02:17.031126   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.031701   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.031729   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.032073   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.032272   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 12:02:17.034510   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42827
	I0610 12:02:17.034557   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42127
	I0610 12:02:17.034950   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.035130   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.035437   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.035459   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.035888   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.035968   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.035986   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.036820   60146 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-281114"
	W0610 12:02:17.036839   60146 addons.go:243] addon default-storageclass should already be in state true
	I0610 12:02:17.036865   60146 host.go:66] Checking if "default-k8s-diff-port-281114" exists ...
	I0610 12:02:17.037323   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.037345   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.038068   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.038408   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.038428   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.039402   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.039436   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.052901   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40043
	I0610 12:02:17.053390   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.053936   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.053959   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.054226   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38515
	I0610 12:02:17.054303   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.054569   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.054905   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.054933   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.055019   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.055040   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.055448   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.055637   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 12:02:17.057623   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 12:02:17.059785   60146 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 12:02:17.058684   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38959
	I0610 12:02:17.060310   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.061277   60146 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 12:02:17.061292   60146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 12:02:17.061311   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 12:02:17.061738   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.061762   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.062097   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.062405   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 12:02:17.064169   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 12:02:17.065635   60146 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0610 12:02:17.065251   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 12:02:17.066901   60146 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0610 12:02:17.065677   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 12:02:17.066921   60146 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0610 12:02:17.066945   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 12:02:17.066952   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 12:02:17.065921   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 12:02:17.067144   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 12:02:17.067267   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 12:02:17.067437   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 12:02:17.070722   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 12:02:17.071110   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 12:02:17.071125   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 12:02:17.071422   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 12:02:17.071582   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 12:02:17.071714   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 12:02:17.072048   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 12:02:17.073784   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46447
	I0610 12:02:17.074157   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.074645   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.074659   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.074986   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.075129   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 12:02:17.076879   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 12:02:17.077138   60146 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 12:02:17.077153   60146 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 12:02:17.077170   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 12:02:17.080253   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 12:02:17.080667   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 12:02:17.080698   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 12:02:17.080862   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 12:02:17.081088   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 12:02:17.081280   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 12:02:17.081466   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 12:02:17.226805   60146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 12:02:17.257188   60146 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-281114" to be "Ready" ...
	I0610 12:02:17.266803   60146 node_ready.go:49] node "default-k8s-diff-port-281114" has status "Ready":"True"
	I0610 12:02:17.266829   60146 node_ready.go:38] duration metric: took 9.610473ms for node "default-k8s-diff-port-281114" to be "Ready" ...
	I0610 12:02:17.266840   60146 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 12:02:17.273132   60146 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5fgtk" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:17.327416   60146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0610 12:02:17.327442   60146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0610 12:02:17.366670   60146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 12:02:17.367685   60146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 12:02:17.378833   60146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0610 12:02:17.378858   60146 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0610 12:02:17.436533   60146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0610 12:02:17.436558   60146 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0610 12:02:17.490426   60146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
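The apply above feeds all four metrics-server manifests that were just copied to /etc/kubernetes/addons to kubectl in a single invocation, with KUBECONFIG passed through sudo on the command line. As a sketch only, the same invocation can be issued from Go; the paths mirror the command in the log, and running it locally instead of over SSH is an assumption for illustration.

// Run the multi-file kubectl apply from the log, passing KUBECONFIG via sudo.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.30.1/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("kubectl apply failed: %v", err)
	}
}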
	I0610 12:02:18.279491   60146 pod_ready.go:92] pod "coredns-7db6d8ff4d-5fgtk" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:18.279516   60146 pod_ready.go:81] duration metric: took 1.006353706s for pod "coredns-7db6d8ff4d-5fgtk" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.279527   60146 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fg8xx" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.286003   60146 pod_ready.go:92] pod "coredns-7db6d8ff4d-fg8xx" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:18.286024   60146 pod_ready.go:81] duration metric: took 6.488693ms for pod "coredns-7db6d8ff4d-fg8xx" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.286036   60146 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.295995   60146 pod_ready.go:92] pod "etcd-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:18.296015   60146 pod_ready.go:81] duration metric: took 9.973573ms for pod "etcd-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.296024   60146 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.302383   60146 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:18.302407   60146 pod_ready.go:81] duration metric: took 6.376673ms for pod "kube-apiserver-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.302418   60146 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.421208   60146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.054498973s)
	I0610 12:02:18.421244   60146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.053533062s)
	I0610 12:02:18.421270   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.421278   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.421285   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.421290   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.421645   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Closing plugin on server side
	I0610 12:02:18.421691   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.421706   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.421715   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.421717   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.421723   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.421726   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.421734   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.421743   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.422083   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Closing plugin on server side
	I0610 12:02:18.422103   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.422122   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.422123   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.422132   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.453377   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.453408   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.453803   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Closing plugin on server side
	I0610 12:02:18.453806   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.453831   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.475839   60146 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:18.475867   60146 pod_ready.go:81] duration metric: took 173.440125ms for pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.475881   60146 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wh756" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.673586   60146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.183120727s)
	I0610 12:02:18.673646   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.673662   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.673961   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Closing plugin on server side
	I0610 12:02:18.674001   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.674010   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.674020   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.674045   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.674315   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Closing plugin on server side
	I0610 12:02:18.674356   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.674365   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.674376   60146 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-281114"
	I0610 12:02:18.676402   60146 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0610 12:02:18.677734   60146 addons.go:510] duration metric: took 1.667897142s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0610 12:02:19.660297   60146 pod_ready.go:92] pod "kube-proxy-wh756" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:19.660327   60146 pod_ready.go:81] duration metric: took 1.184438894s for pod "kube-proxy-wh756" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:19.660340   60146 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:20.060583   60146 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:20.060607   60146 pod_ready.go:81] duration metric: took 400.25949ms for pod "kube-scheduler-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:20.060616   60146 pod_ready.go:38] duration metric: took 2.793765456s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 12:02:20.060634   60146 api_server.go:52] waiting for apiserver process to appear ...
	I0610 12:02:20.060693   60146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 12:02:20.076416   60146 api_server.go:72] duration metric: took 3.066630137s to wait for apiserver process to appear ...
	I0610 12:02:20.076441   60146 api_server.go:88] waiting for apiserver healthz status ...
	I0610 12:02:20.076462   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 12:02:20.081614   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 200:
	ok
	I0610 12:02:20.082567   60146 api_server.go:141] control plane version: v1.30.1
	I0610 12:02:20.082589   60146 api_server.go:131] duration metric: took 6.142085ms to wait for apiserver health ...
	I0610 12:02:20.082597   60146 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 12:02:20.263766   60146 system_pods.go:59] 9 kube-system pods found
	I0610 12:02:20.263803   60146 system_pods.go:61] "coredns-7db6d8ff4d-5fgtk" [03d948ca-122a-4042-8371-8a9422c187bc] Running
	I0610 12:02:20.263808   60146 system_pods.go:61] "coredns-7db6d8ff4d-fg8xx" [e91ae09c-8821-4843-8c0d-ea734433c213] Running
	I0610 12:02:20.263815   60146 system_pods.go:61] "etcd-default-k8s-diff-port-281114" [110985f7-c57e-453d-8bda-c5104d879eb4] Running
	I0610 12:02:20.263821   60146 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-281114" [e62181ca-648e-4d5f-b2a7-00bed06f3bd2] Running
	I0610 12:02:20.263827   60146 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-281114" [109f02bd-8c9c-40f6-98e8-5cf2b6d97deb] Running
	I0610 12:02:20.263832   60146 system_pods.go:61] "kube-proxy-wh756" [57cbf3d6-c149-4ae1-84d3-6df6a53ea091] Running
	I0610 12:02:20.263838   60146 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-281114" [00889b82-f4fc-4a98-86cd-ab1028dc4461] Running
	I0610 12:02:20.263848   60146 system_pods.go:61] "metrics-server-569cc877fc-j58s9" [f1c91612-b967-447e-bc71-13ba0d11864b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 12:02:20.263854   60146 system_pods.go:61] "storage-provisioner" [8df0a38c-5e91-4b10-a303-c4eff9545669] Running
	I0610 12:02:20.263866   60146 system_pods.go:74] duration metric: took 181.261717ms to wait for pod list to return data ...
	I0610 12:02:20.263878   60146 default_sa.go:34] waiting for default service account to be created ...
	I0610 12:02:20.460812   60146 default_sa.go:45] found service account: "default"
	I0610 12:02:20.460848   60146 default_sa.go:55] duration metric: took 196.961501ms for default service account to be created ...
	I0610 12:02:20.460860   60146 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 12:02:20.664565   60146 system_pods.go:86] 9 kube-system pods found
	I0610 12:02:20.664591   60146 system_pods.go:89] "coredns-7db6d8ff4d-5fgtk" [03d948ca-122a-4042-8371-8a9422c187bc] Running
	I0610 12:02:20.664596   60146 system_pods.go:89] "coredns-7db6d8ff4d-fg8xx" [e91ae09c-8821-4843-8c0d-ea734433c213] Running
	I0610 12:02:20.664601   60146 system_pods.go:89] "etcd-default-k8s-diff-port-281114" [110985f7-c57e-453d-8bda-c5104d879eb4] Running
	I0610 12:02:20.664606   60146 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-281114" [e62181ca-648e-4d5f-b2a7-00bed06f3bd2] Running
	I0610 12:02:20.664610   60146 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-281114" [109f02bd-8c9c-40f6-98e8-5cf2b6d97deb] Running
	I0610 12:02:20.664614   60146 system_pods.go:89] "kube-proxy-wh756" [57cbf3d6-c149-4ae1-84d3-6df6a53ea091] Running
	I0610 12:02:20.664618   60146 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-281114" [00889b82-f4fc-4a98-86cd-ab1028dc4461] Running
	I0610 12:02:20.664626   60146 system_pods.go:89] "metrics-server-569cc877fc-j58s9" [f1c91612-b967-447e-bc71-13ba0d11864b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 12:02:20.664631   60146 system_pods.go:89] "storage-provisioner" [8df0a38c-5e91-4b10-a303-c4eff9545669] Running
	I0610 12:02:20.664640   60146 system_pods.go:126] duration metric: took 203.773693ms to wait for k8s-apps to be running ...
	I0610 12:02:20.664649   60146 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 12:02:20.664690   60146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 12:02:20.681388   60146 system_svc.go:56] duration metric: took 16.731528ms WaitForService to wait for kubelet
	I0610 12:02:20.681411   60146 kubeadm.go:576] duration metric: took 3.671630148s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 12:02:20.681432   60146 node_conditions.go:102] verifying NodePressure condition ...
	I0610 12:02:20.861346   60146 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 12:02:20.861369   60146 node_conditions.go:123] node cpu capacity is 2
	I0610 12:02:20.861379   60146 node_conditions.go:105] duration metric: took 179.94199ms to run NodePressure ...
	I0610 12:02:20.861390   60146 start.go:240] waiting for startup goroutines ...
	I0610 12:02:20.861396   60146 start.go:245] waiting for cluster config update ...
	I0610 12:02:20.861405   60146 start.go:254] writing updated cluster config ...
	I0610 12:02:20.861658   60146 ssh_runner.go:195] Run: rm -f paused
	I0610 12:02:20.911134   60146 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 12:02:20.913129   60146 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-281114" cluster and "default" namespace by default
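
The "Checking apiserver healthz" lines above are easy to reproduce by hand. The sketch below is an illustrative Go reimplementation of that probe, not minikube's actual code: it assumes the apiserver URL from this run (https://192.168.50.222:8444/healthz) and skips TLS verification because the probing host does not trust the cluster CA. A healthy apiserver answers 200 with the body "ok", matching the log output above.

// healthz_probe.go - minimal sketch of the kind of apiserver healthz probe
// logged above (api_server.go "Checking apiserver healthz at ..."). This is
// an illustrative reimplementation, not minikube's code; the URL is taken
// from this run and certificate verification is skipped (assumption for this
// sketch, since the host does not trust the cluster CA).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The probe only cares about the HTTP status and body,
			// so TLS verification is disabled in this sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.50.222:8444/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver returns 200 with "ok", as in the log above.
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}

The same check can be pointed at any other profile in this report by substituting that cluster's advertised address and API server port.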
	
	
	==> CRI-O <==
	Jun 10 12:02:36 embed-certs-832735 crio[734]: time="2024-06-10 12:02:36.824244563Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718020956824157194,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b5629285-cb43-4dbd-a1fc-4be5b030fc8e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:02:36 embed-certs-832735 crio[734]: time="2024-06-10 12:02:36.825141330Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0aa6c147-b98f-430a-a37a-7dc3b44589fa name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:02:36 embed-certs-832735 crio[734]: time="2024-06-10 12:02:36.825216067Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0aa6c147-b98f-430a-a37a-7dc3b44589fa name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:02:36 embed-certs-832735 crio[734]: time="2024-06-10 12:02:36.825469987Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e,PodSandboxId:6426cdc85c4e032d630d1f3f20e3a1d911b05b5724564b52378b60625d241c19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718020182077963619,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47aa143e-3545-492d-ac93-e62f0076e0f4,},Annotations:map[string]string{io.kubernetes.container.hash: 5af6a72a,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba048d2e541288c34094ca550643148bb0b678c978c73d61f1d5e05a37221409,PodSandboxId:1ae5f2ccfc7b21cf9a3d8c640b4451a279b94de084c199fc4f85a661935aef90,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718020161756999359,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5a24d2e-a638-4a3c-bd49-8c6f5c07b55b,},Annotations:map[string]string{io.kubernetes.container.hash: e1a981bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933,PodSandboxId:923f47493ca157b932694bb125b000a5098d73225de284ba506ace381c9bec54,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020158905307912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7dlzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2618cd-b48c-44bd-a07d-4fe4585a14fa,},Annotations:map[string]string{io.kubernetes.container.hash: 2e716d93,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb,PodSandboxId:520c8f4f7df845a87160476ca3b69e4518730eb6fb678f6f7f6c8e6584a15b68,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718020151236006901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7x2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe1cd055-691f-46b1-a
da7-7dded31d2308,},Annotations:map[string]string{io.kubernetes.container.hash: 26a6f7ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262,PodSandboxId:6426cdc85c4e032d630d1f3f20e3a1d911b05b5724564b52378b60625d241c19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718020151226592147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47aa143e-3545-492d-ac93-e62f0076e
0f4,},Annotations:map[string]string{io.kubernetes.container.hash: 5af6a72a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9,PodSandboxId:332276b6ad39dc96b4106806b7d77b06f1db626468eae1d34cd7c0fb674d5ffc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718020147588004348,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165c62b8eb6ccf1956b1ca8d650bbbf1,},Annota
tions:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c,PodSandboxId:9a64ac451ab433068e46583db1b28db0e3920ec45344d20ced406a5a7294fd0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718020147577750081,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f460092c2c832cd821e0ae3b0d1c7dae,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: fa055ffe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29,PodSandboxId:38e659f103b780fd8f5e98550704fcf98f1361ec0501bcb94ba51dbf158e2b23,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718020147605046866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f8f26a120a10c36d3480d7e942d748f,},Annotations:map[string]string{io.kubernetes.container.hash:
d59a1a0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43,PodSandboxId:44d07e419bbf8db720588bfefe8724f72a30ce268ec55872513035ac188fb1af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718020147590822623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de4938d9e608e2b1641472107eb959dd,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0aa6c147-b98f-430a-a37a-7dc3b44589fa name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:02:36 embed-certs-832735 crio[734]: time="2024-06-10 12:02:36.862623543Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=36127ba7-134d-4fff-a1d8-71587e841386 name=/runtime.v1.RuntimeService/Version
	Jun 10 12:02:36 embed-certs-832735 crio[734]: time="2024-06-10 12:02:36.862753064Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36127ba7-134d-4fff-a1d8-71587e841386 name=/runtime.v1.RuntimeService/Version
	Jun 10 12:02:36 embed-certs-832735 crio[734]: time="2024-06-10 12:02:36.863881088Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3122b373-631d-4a90-a4e1-89fed0bbbd8f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:02:36 embed-certs-832735 crio[734]: time="2024-06-10 12:02:36.864395252Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718020956864372935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3122b373-631d-4a90-a4e1-89fed0bbbd8f name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:02:36 embed-certs-832735 crio[734]: time="2024-06-10 12:02:36.864897979Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5052c057-74a5-464d-828e-d79796a5cefa name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:02:36 embed-certs-832735 crio[734]: time="2024-06-10 12:02:36.864958301Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5052c057-74a5-464d-828e-d79796a5cefa name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:02:36 embed-certs-832735 crio[734]: time="2024-06-10 12:02:36.865208512Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e,PodSandboxId:6426cdc85c4e032d630d1f3f20e3a1d911b05b5724564b52378b60625d241c19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718020182077963619,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47aa143e-3545-492d-ac93-e62f0076e0f4,},Annotations:map[string]string{io.kubernetes.container.hash: 5af6a72a,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba048d2e541288c34094ca550643148bb0b678c978c73d61f1d5e05a37221409,PodSandboxId:1ae5f2ccfc7b21cf9a3d8c640b4451a279b94de084c199fc4f85a661935aef90,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718020161756999359,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5a24d2e-a638-4a3c-bd49-8c6f5c07b55b,},Annotations:map[string]string{io.kubernetes.container.hash: e1a981bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933,PodSandboxId:923f47493ca157b932694bb125b000a5098d73225de284ba506ace381c9bec54,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020158905307912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7dlzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2618cd-b48c-44bd-a07d-4fe4585a14fa,},Annotations:map[string]string{io.kubernetes.container.hash: 2e716d93,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb,PodSandboxId:520c8f4f7df845a87160476ca3b69e4518730eb6fb678f6f7f6c8e6584a15b68,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718020151236006901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7x2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe1cd055-691f-46b1-a
da7-7dded31d2308,},Annotations:map[string]string{io.kubernetes.container.hash: 26a6f7ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262,PodSandboxId:6426cdc85c4e032d630d1f3f20e3a1d911b05b5724564b52378b60625d241c19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718020151226592147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47aa143e-3545-492d-ac93-e62f0076e
0f4,},Annotations:map[string]string{io.kubernetes.container.hash: 5af6a72a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9,PodSandboxId:332276b6ad39dc96b4106806b7d77b06f1db626468eae1d34cd7c0fb674d5ffc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718020147588004348,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165c62b8eb6ccf1956b1ca8d650bbbf1,},Annota
tions:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c,PodSandboxId:9a64ac451ab433068e46583db1b28db0e3920ec45344d20ced406a5a7294fd0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718020147577750081,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f460092c2c832cd821e0ae3b0d1c7dae,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: fa055ffe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29,PodSandboxId:38e659f103b780fd8f5e98550704fcf98f1361ec0501bcb94ba51dbf158e2b23,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718020147605046866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f8f26a120a10c36d3480d7e942d748f,},Annotations:map[string]string{io.kubernetes.container.hash:
d59a1a0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43,PodSandboxId:44d07e419bbf8db720588bfefe8724f72a30ce268ec55872513035ac188fb1af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718020147590822623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de4938d9e608e2b1641472107eb959dd,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5052c057-74a5-464d-828e-d79796a5cefa name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:02:36 embed-certs-832735 crio[734]: time="2024-06-10 12:02:36.905458048Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b1b39e63-c352-42e9-8576-5e722a8ad43b name=/runtime.v1.RuntimeService/Version
	Jun 10 12:02:36 embed-certs-832735 crio[734]: time="2024-06-10 12:02:36.905557696Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b1b39e63-c352-42e9-8576-5e722a8ad43b name=/runtime.v1.RuntimeService/Version
	Jun 10 12:02:36 embed-certs-832735 crio[734]: time="2024-06-10 12:02:36.906934864Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1799e194-59b0-439e-9813-327d78bac149 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:02:36 embed-certs-832735 crio[734]: time="2024-06-10 12:02:36.907369426Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718020956907347904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1799e194-59b0-439e-9813-327d78bac149 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:02:36 embed-certs-832735 crio[734]: time="2024-06-10 12:02:36.908000761Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=138ca653-e909-4a5f-9937-a1b02ccb0440 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:02:36 embed-certs-832735 crio[734]: time="2024-06-10 12:02:36.908152992Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=138ca653-e909-4a5f-9937-a1b02ccb0440 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:02:36 embed-certs-832735 crio[734]: time="2024-06-10 12:02:36.908343642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e,PodSandboxId:6426cdc85c4e032d630d1f3f20e3a1d911b05b5724564b52378b60625d241c19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718020182077963619,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47aa143e-3545-492d-ac93-e62f0076e0f4,},Annotations:map[string]string{io.kubernetes.container.hash: 5af6a72a,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba048d2e541288c34094ca550643148bb0b678c978c73d61f1d5e05a37221409,PodSandboxId:1ae5f2ccfc7b21cf9a3d8c640b4451a279b94de084c199fc4f85a661935aef90,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718020161756999359,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5a24d2e-a638-4a3c-bd49-8c6f5c07b55b,},Annotations:map[string]string{io.kubernetes.container.hash: e1a981bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933,PodSandboxId:923f47493ca157b932694bb125b000a5098d73225de284ba506ace381c9bec54,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020158905307912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7dlzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2618cd-b48c-44bd-a07d-4fe4585a14fa,},Annotations:map[string]string{io.kubernetes.container.hash: 2e716d93,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb,PodSandboxId:520c8f4f7df845a87160476ca3b69e4518730eb6fb678f6f7f6c8e6584a15b68,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718020151236006901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7x2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe1cd055-691f-46b1-a
da7-7dded31d2308,},Annotations:map[string]string{io.kubernetes.container.hash: 26a6f7ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262,PodSandboxId:6426cdc85c4e032d630d1f3f20e3a1d911b05b5724564b52378b60625d241c19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718020151226592147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47aa143e-3545-492d-ac93-e62f0076e
0f4,},Annotations:map[string]string{io.kubernetes.container.hash: 5af6a72a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9,PodSandboxId:332276b6ad39dc96b4106806b7d77b06f1db626468eae1d34cd7c0fb674d5ffc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718020147588004348,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165c62b8eb6ccf1956b1ca8d650bbbf1,},Annota
tions:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c,PodSandboxId:9a64ac451ab433068e46583db1b28db0e3920ec45344d20ced406a5a7294fd0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718020147577750081,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f460092c2c832cd821e0ae3b0d1c7dae,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: fa055ffe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29,PodSandboxId:38e659f103b780fd8f5e98550704fcf98f1361ec0501bcb94ba51dbf158e2b23,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718020147605046866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f8f26a120a10c36d3480d7e942d748f,},Annotations:map[string]string{io.kubernetes.container.hash:
d59a1a0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43,PodSandboxId:44d07e419bbf8db720588bfefe8724f72a30ce268ec55872513035ac188fb1af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718020147590822623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de4938d9e608e2b1641472107eb959dd,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=138ca653-e909-4a5f-9937-a1b02ccb0440 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:02:36 embed-certs-832735 crio[734]: time="2024-06-10 12:02:36.943830773Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=67f93ac0-a2b7-4107-af44-c065dd3836c9 name=/runtime.v1.RuntimeService/Version
	Jun 10 12:02:36 embed-certs-832735 crio[734]: time="2024-06-10 12:02:36.943934247Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=67f93ac0-a2b7-4107-af44-c065dd3836c9 name=/runtime.v1.RuntimeService/Version
	Jun 10 12:02:36 embed-certs-832735 crio[734]: time="2024-06-10 12:02:36.944999236Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f550f03b-b9ab-4971-9c4f-0f8f1b03bd08 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:02:36 embed-certs-832735 crio[734]: time="2024-06-10 12:02:36.945524047Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718020956945502460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f550f03b-b9ab-4971-9c4f-0f8f1b03bd08 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:02:36 embed-certs-832735 crio[734]: time="2024-06-10 12:02:36.946152482Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e994d202-13d9-4bfe-a7e3-f8666c84caa0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:02:36 embed-certs-832735 crio[734]: time="2024-06-10 12:02:36.946222255Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e994d202-13d9-4bfe-a7e3-f8666c84caa0 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:02:36 embed-certs-832735 crio[734]: time="2024-06-10 12:02:36.946574012Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e,PodSandboxId:6426cdc85c4e032d630d1f3f20e3a1d911b05b5724564b52378b60625d241c19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718020182077963619,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47aa143e-3545-492d-ac93-e62f0076e0f4,},Annotations:map[string]string{io.kubernetes.container.hash: 5af6a72a,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba048d2e541288c34094ca550643148bb0b678c978c73d61f1d5e05a37221409,PodSandboxId:1ae5f2ccfc7b21cf9a3d8c640b4451a279b94de084c199fc4f85a661935aef90,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718020161756999359,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5a24d2e-a638-4a3c-bd49-8c6f5c07b55b,},Annotations:map[string]string{io.kubernetes.container.hash: e1a981bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933,PodSandboxId:923f47493ca157b932694bb125b000a5098d73225de284ba506ace381c9bec54,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020158905307912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7dlzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2618cd-b48c-44bd-a07d-4fe4585a14fa,},Annotations:map[string]string{io.kubernetes.container.hash: 2e716d93,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb,PodSandboxId:520c8f4f7df845a87160476ca3b69e4518730eb6fb678f6f7f6c8e6584a15b68,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718020151236006901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7x2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe1cd055-691f-46b1-a
da7-7dded31d2308,},Annotations:map[string]string{io.kubernetes.container.hash: 26a6f7ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262,PodSandboxId:6426cdc85c4e032d630d1f3f20e3a1d911b05b5724564b52378b60625d241c19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718020151226592147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47aa143e-3545-492d-ac93-e62f0076e
0f4,},Annotations:map[string]string{io.kubernetes.container.hash: 5af6a72a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9,PodSandboxId:332276b6ad39dc96b4106806b7d77b06f1db626468eae1d34cd7c0fb674d5ffc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718020147588004348,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165c62b8eb6ccf1956b1ca8d650bbbf1,},Annota
tions:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c,PodSandboxId:9a64ac451ab433068e46583db1b28db0e3920ec45344d20ced406a5a7294fd0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718020147577750081,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f460092c2c832cd821e0ae3b0d1c7dae,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: fa055ffe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29,PodSandboxId:38e659f103b780fd8f5e98550704fcf98f1361ec0501bcb94ba51dbf158e2b23,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718020147605046866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f8f26a120a10c36d3480d7e942d748f,},Annotations:map[string]string{io.kubernetes.container.hash:
d59a1a0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43,PodSandboxId:44d07e419bbf8db720588bfefe8724f72a30ce268ec55872513035ac188fb1af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718020147590822623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de4938d9e608e2b1641472107eb959dd,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e994d202-13d9-4bfe-a7e3-f8666c84caa0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5509696f5a811       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   6426cdc85c4e0       storage-provisioner
	ba048d2e54128       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   1ae5f2ccfc7b2       busybox
	04ef0964178ae       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   923f47493ca15       coredns-7db6d8ff4d-7dlzb
	3c7292ccdd40d       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      13 minutes ago      Running             kube-proxy                1                   520c8f4f7df84       kube-proxy-b7x2p
	8d8bc4b6855e1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   6426cdc85c4e0       storage-provisioner
	61727f8f43e1d       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      13 minutes ago      Running             kube-apiserver            1                   38e659f103b78       kube-apiserver-embed-certs-832735
	7badb7b66c71f       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      13 minutes ago      Running             kube-controller-manager   1                   44d07e419bbf8       kube-controller-manager-embed-certs-832735
	7afbab9bcf1ac       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      13 minutes ago      Running             kube-scheduler            1                   332276b6ad39d       kube-scheduler-embed-certs-832735
	0c16d9960d9ab       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago      Running             etcd                      1                   9a64ac451ab43       etcd-embed-certs-832735
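
The table above (and the ListContainers debug entries in the CRI-O section) comes from the CRI RuntimeService exposed on the node's CRI-O socket. The sketch below is a minimal, hypothetical client for that call, assuming the k8s.io/cri-api and google.golang.org/grpc modules and root access to unix:///var/run/crio/crio.sock; it is not the crictl or minikube implementation.

// cri_list.go - minimal sketch of the ListContainers call that produces the
// CRI-O debug entries and the container status table above. It dials the
// socket advertised in the node's cri-socket annotation
// (unix:///var/run/crio/crio.sock). Illustrative only.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter returns the full container list, matching the
	// "No filters were applied, returning full container list" lines above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %-25s attempt=%d  state=%s\n",
			c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}

Run against this node, the output would list the same nine containers shown above, with their names, restart attempts, and running/exited states.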
	
	
	==> coredns [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:33457 - 57325 "HINFO IN 4448557384152593783.2575088353663232798. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015730222s
	
	
	==> describe nodes <==
	Name:               embed-certs-832735
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-832735
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=embed-certs-832735
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T11_39_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 11:39:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-832735
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 12:02:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 11:59:52 +0000   Mon, 10 Jun 2024 11:39:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 11:59:52 +0000   Mon, 10 Jun 2024 11:39:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 11:59:52 +0000   Mon, 10 Jun 2024 11:39:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 11:59:52 +0000   Mon, 10 Jun 2024 11:49:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.19
	  Hostname:    embed-certs-832735
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 afe682491c8144db9ef90386aaf4c58e
	  System UUID:                afe68249-1c81-44db-9ef9-0386aaf4c58e
	  Boot ID:                    8914484a-56f6-42c0-b4ac-3c6b90f63b0e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7db6d8ff4d-7dlzb                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-embed-certs-832735                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-embed-certs-832735             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-832735    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-b7x2p                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-embed-certs-832735             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-569cc877fc-5zg8j               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     22m                kubelet          Node embed-certs-832735 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node embed-certs-832735 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node embed-certs-832735 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                22m                kubelet          Node embed-certs-832735 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node embed-certs-832735 event: Registered Node embed-certs-832735 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-832735 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-832735 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-832735 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-832735 event: Registered Node embed-certs-832735 in Controller
	
	
	==> dmesg <==
	[Jun10 11:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.062373] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.050432] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.034773] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.853603] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.376409] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.597653] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.061973] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056388] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.156545] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.135766] systemd-fstab-generator[690]: Ignoring "noauto" option for root device
	[  +0.280231] systemd-fstab-generator[719]: Ignoring "noauto" option for root device
	[Jun10 11:49] systemd-fstab-generator[817]: Ignoring "noauto" option for root device
	[  +2.489149] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +0.064187] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.524571] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.479242] systemd-fstab-generator[1552]: Ignoring "noauto" option for root device
	[  +3.232159] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.684754] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c] <==
	{"level":"info","ts":"2024-06-10T11:49:08.999159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d18a13a55fd66152 is starting a new election at term 2"}
	{"level":"info","ts":"2024-06-10T11:49:08.999206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d18a13a55fd66152 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-06-10T11:49:08.999241Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d18a13a55fd66152 received MsgPreVoteResp from d18a13a55fd66152 at term 2"}
	{"level":"info","ts":"2024-06-10T11:49:08.999253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d18a13a55fd66152 became candidate at term 3"}
	{"level":"info","ts":"2024-06-10T11:49:08.999259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d18a13a55fd66152 received MsgVoteResp from d18a13a55fd66152 at term 3"}
	{"level":"info","ts":"2024-06-10T11:49:08.999267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d18a13a55fd66152 became leader at term 3"}
	{"level":"info","ts":"2024-06-10T11:49:08.999276Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d18a13a55fd66152 elected leader d18a13a55fd66152 at term 3"}
	{"level":"info","ts":"2024-06-10T11:49:09.008338Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d18a13a55fd66152","local-member-attributes":"{Name:embed-certs-832735 ClientURLs:[https://192.168.61.19:2379]}","request-path":"/0/members/d18a13a55fd66152/attributes","cluster-id":"5cb6c2b3fa543b56","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-10T11:49:09.008465Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T11:49:09.008533Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T11:49:09.011097Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-10T11:49:09.011165Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-10T11:49:09.016848Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.19:2379"}
	{"level":"info","ts":"2024-06-10T11:49:09.026431Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-10T11:57:04.196339Z","caller":"traceutil/trace.go:171","msg":"trace[490671863] transaction","detail":"{read_only:false; response_revision:953; number_of_response:1; }","duration":"577.007146ms","start":"2024-06-10T11:57:03.61928Z","end":"2024-06-10T11:57:04.196287Z","steps":["trace[490671863] 'process raft request'  (duration: 576.509689ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T11:57:04.19777Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-10T11:57:03.619255Z","time spent":"577.783784ms","remote":"127.0.0.1:49968","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:952 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-06-10T11:57:04.19816Z","caller":"traceutil/trace.go:171","msg":"trace[2102256102] linearizableReadLoop","detail":"{readStateIndex:1086; appliedIndex:1085; }","duration":"457.455868ms","start":"2024-06-10T11:57:03.738465Z","end":"2024-06-10T11:57:04.195921Z","steps":["trace[2102256102] 'read index received'  (duration: 457.268418ms)","trace[2102256102] 'applied index is now lower than readState.Index'  (duration: 186.724µs)"],"step_count":2}
	{"level":"warn","ts":"2024-06-10T11:57:04.198294Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"459.837238ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-10T11:57:04.198443Z","caller":"traceutil/trace.go:171","msg":"trace[1450497661] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:953; }","duration":"460.007293ms","start":"2024-06-10T11:57:03.73842Z","end":"2024-06-10T11:57:04.198427Z","steps":["trace[1450497661] 'agreement among raft nodes before linearized reading'  (duration: 459.835419ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T11:57:04.198497Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-10T11:57:03.738407Z","time spent":"460.08014ms","remote":"127.0.0.1:50000","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":28,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"warn","ts":"2024-06-10T11:57:04.198824Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"264.389096ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-10T11:57:04.199139Z","caller":"traceutil/trace.go:171","msg":"trace[993035526] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:953; }","duration":"264.722516ms","start":"2024-06-10T11:57:03.934408Z","end":"2024-06-10T11:57:04.19913Z","steps":["trace[993035526] 'agreement among raft nodes before linearized reading'  (duration: 264.389842ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T11:59:09.07328Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":810}
	{"level":"info","ts":"2024-06-10T11:59:09.083576Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":810,"took":"9.937223ms","hash":311689356,"current-db-size-bytes":2564096,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2564096,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-06-10T11:59:09.083633Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":311689356,"revision":810,"compact-revision":-1}
	
	
	==> kernel <==
	 12:02:37 up 13 min,  0 users,  load average: 0.01, 0.05, 0.02
	Linux embed-certs-832735 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29] <==
	I0610 11:57:11.424499       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 11:59:10.424801       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 11:59:10.425167       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0610 11:59:11.426218       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 11:59:11.426315       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0610 11:59:11.426346       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 11:59:11.426417       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 11:59:11.426476       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0610 11:59:11.427614       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:00:11.427350       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:00:11.427424       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0610 12:00:11.427434       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:00:11.428518       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:00:11.428651       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0610 12:00:11.428680       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:02:11.427611       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:02:11.427973       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0610 12:02:11.428009       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:02:11.429846       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:02:11.429927       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0610 12:02:11.429935       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43] <==
	I0610 11:56:54.334766       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 11:57:23.887358       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 11:57:24.342197       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 11:57:53.892438       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 11:57:54.349386       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 11:58:23.897876       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 11:58:24.358969       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 11:58:53.902307       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 11:58:54.367030       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 11:59:23.909135       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 11:59:24.374534       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 11:59:53.915121       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 11:59:54.382424       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0610 12:00:17.884526       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="491.692µs"
	E0610 12:00:23.920366       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:00:24.392364       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0610 12:00:32.888990       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="104.951µs"
	E0610 12:00:53.925004       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:00:54.400474       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:01:23.934669       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:01:24.408729       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:01:53.939612       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:01:54.416250       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:02:23.945434       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:02:24.423670       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb] <==
	I0610 11:49:11.465131       1 server_linux.go:69] "Using iptables proxy"
	I0610 11:49:11.477896       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.19"]
	I0610 11:49:11.531590       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 11:49:11.531641       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 11:49:11.531658       1 server_linux.go:165] "Using iptables Proxier"
	I0610 11:49:11.534364       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 11:49:11.534893       1 server.go:872] "Version info" version="v1.30.1"
	I0610 11:49:11.534924       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 11:49:11.538387       1 config.go:192] "Starting service config controller"
	I0610 11:49:11.538458       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 11:49:11.538539       1 config.go:101] "Starting endpoint slice config controller"
	I0610 11:49:11.538581       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 11:49:11.540749       1 config.go:319] "Starting node config controller"
	I0610 11:49:11.540777       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 11:49:11.639723       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 11:49:11.639808       1 shared_informer.go:320] Caches are synced for service config
	I0610 11:49:11.641925       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9] <==
	I0610 11:49:08.891370       1 serving.go:380] Generated self-signed cert in-memory
	W0610 11:49:10.379333       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0610 11:49:10.379413       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 11:49:10.379424       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0610 11:49:10.379430       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 11:49:10.414577       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0610 11:49:10.414625       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 11:49:10.418276       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0610 11:49:10.418361       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 11:49:10.418381       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 11:49:10.418397       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 11:49:10.519265       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 10 12:00:05 embed-certs-832735 kubelet[947]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:00:05 embed-certs-832735 kubelet[947]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:00:05 embed-certs-832735 kubelet[947]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:00:05 embed-certs-832735 kubelet[947]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:00:17 embed-certs-832735 kubelet[947]: E0610 12:00:17.867871     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5zg8j" podUID="e979b4b0-356d-479d-990f-d9e6e46a1a9b"
	Jun 10 12:00:32 embed-certs-832735 kubelet[947]: E0610 12:00:32.866236     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5zg8j" podUID="e979b4b0-356d-479d-990f-d9e6e46a1a9b"
	Jun 10 12:00:46 embed-certs-832735 kubelet[947]: E0610 12:00:46.867551     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5zg8j" podUID="e979b4b0-356d-479d-990f-d9e6e46a1a9b"
	Jun 10 12:01:00 embed-certs-832735 kubelet[947]: E0610 12:01:00.865507     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5zg8j" podUID="e979b4b0-356d-479d-990f-d9e6e46a1a9b"
	Jun 10 12:01:05 embed-certs-832735 kubelet[947]: E0610 12:01:05.885492     947 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:01:05 embed-certs-832735 kubelet[947]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:01:05 embed-certs-832735 kubelet[947]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:01:05 embed-certs-832735 kubelet[947]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:01:05 embed-certs-832735 kubelet[947]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:01:15 embed-certs-832735 kubelet[947]: E0610 12:01:15.867839     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5zg8j" podUID="e979b4b0-356d-479d-990f-d9e6e46a1a9b"
	Jun 10 12:01:26 embed-certs-832735 kubelet[947]: E0610 12:01:26.867914     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5zg8j" podUID="e979b4b0-356d-479d-990f-d9e6e46a1a9b"
	Jun 10 12:01:40 embed-certs-832735 kubelet[947]: E0610 12:01:40.866220     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5zg8j" podUID="e979b4b0-356d-479d-990f-d9e6e46a1a9b"
	Jun 10 12:01:52 embed-certs-832735 kubelet[947]: E0610 12:01:52.866411     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5zg8j" podUID="e979b4b0-356d-479d-990f-d9e6e46a1a9b"
	Jun 10 12:02:05 embed-certs-832735 kubelet[947]: E0610 12:02:05.885946     947 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:02:05 embed-certs-832735 kubelet[947]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:02:05 embed-certs-832735 kubelet[947]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:02:05 embed-certs-832735 kubelet[947]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:02:05 embed-certs-832735 kubelet[947]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:02:06 embed-certs-832735 kubelet[947]: E0610 12:02:06.866644     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5zg8j" podUID="e979b4b0-356d-479d-990f-d9e6e46a1a9b"
	Jun 10 12:02:19 embed-certs-832735 kubelet[947]: E0610 12:02:19.867370     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5zg8j" podUID="e979b4b0-356d-479d-990f-d9e6e46a1a9b"
	Jun 10 12:02:30 embed-certs-832735 kubelet[947]: E0610 12:02:30.865892     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5zg8j" podUID="e979b4b0-356d-479d-990f-d9e6e46a1a9b"
	
	
	==> storage-provisioner [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e] <==
	I0610 11:49:42.166129       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 11:49:42.177420       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 11:49:42.177530       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 11:49:59.576289       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 11:49:59.576460       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-832735_580a0a61-44ec-48ce-9195-bda17322e0ce!
	I0610 11:49:59.578291       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"78273e0e-e224-448f-8e85-7cd63396fc44", APIVersion:"v1", ResourceVersion:"593", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-832735_580a0a61-44ec-48ce-9195-bda17322e0ce became leader
	I0610 11:49:59.676873       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-832735_580a0a61-44ec-48ce-9195-bda17322e0ce!
	
	
	==> storage-provisioner [8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262] <==
	I0610 11:49:11.401301       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0610 11:49:41.411619       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
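The repeated metrics-server ImagePullBackOff entries in the kubelet log above line up with how this profile was configured: the Audit table later in this report records the addon being enabled with its registry redirected to fake.domain (addons enable metrics-server -p embed-certs-832735 --registries=MetricsServer=fake.domain), so the image pull can never succeed and metrics-server-569cc877fc-5zg8j stays non-running. A rough manual check along the same lines, reusing commands already shown elsewhere in this report, would be:

	kubectl --context embed-certs-832735 -n kube-system get pods --field-selector=status.phase!=Running
	kubectl --context embed-certs-832735 -n kube-system describe pod metrics-server-569cc877fc-5zg8j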
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-832735 -n embed-certs-832735
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-832735 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-5zg8j
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-832735 describe pod metrics-server-569cc877fc-5zg8j
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-832735 describe pod metrics-server-569cc877fc-5zg8j: exit status 1 (60.713172ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-5zg8j" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-832735 describe pod metrics-server-569cc877fc-5zg8j: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.55s)
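For context on the failure mode: UserAppExistsAfterStop waits for pods labelled k8s-app=kubernetes-dashboard to appear after the restart (a 9m0s window in the parallel no-preload run below). A rough manual equivalent, assuming the label and namespace shown in that no-preload output, would be:

	kubectl --context embed-certs-832735 -n kubernetes-dashboard wait --for=condition=ready pod --selector=k8s-app=kubernetes-dashboard --timeout=540s

In this run the corresponding "addons enable dashboard -p embed-certs-832735" call has no End Time in the Audit table of the no-preload section below, so no dashboard pods ever appeared and the wait timed out.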

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0610 11:54:12.453226   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-298179 -n no-preload-298179
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-06-10 12:02:47.139560315 +0000 UTC m=+6120.225591809
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-298179 -n no-preload-298179
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-298179 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-298179 logs -n 25: (1.313960027s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-324836                              | cert-expiration-324836       | jenkins | v1.33.1 | 10 Jun 24 11:39 UTC | 10 Jun 24 11:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-036579 | jenkins | v1.33.1 | 10 Jun 24 11:39 UTC | 10 Jun 24 11:39 UTC |
	|         | disable-driver-mounts-036579                           |                              |         |         |                     |                     |
	| start   | -p no-preload-298179                                   | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:39 UTC | 10 Jun 24 11:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-832735            | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:40 UTC | 10 Jun 24 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-832735                                  | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC | 10 Jun 24 11:41 UTC |
	| addons  | enable metrics-server -p no-preload-298179             | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC | 10 Jun 24 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-298179                                   | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC | 10 Jun 24 11:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC | 10 Jun 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-832735                 | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-832735                                  | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC | 10 Jun 24 11:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-166693        | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-298179                  | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC | 10 Jun 24 11:44 UTC |
	| start   | -p                                                     | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC | 10 Jun 24 11:49 UTC |
	|         | default-k8s-diff-port-281114                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p no-preload-298179                                   | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC | 10 Jun 24 11:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-166693                              | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:45 UTC | 10 Jun 24 11:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-166693             | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:45 UTC | 10 Jun 24 11:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-166693                              | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-281114  | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:49 UTC | 10 Jun 24 11:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:49 UTC |                     |
	|         | default-k8s-diff-port-281114                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-281114       | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:51 UTC | 10 Jun 24 12:02 UTC |
	|         | default-k8s-diff-port-281114                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 11:51:53
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 11:51:53.675460   60146 out.go:291] Setting OutFile to fd 1 ...
	I0610 11:51:53.675676   60146 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:51:53.675684   60146 out.go:304] Setting ErrFile to fd 2...
	I0610 11:51:53.675688   60146 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:51:53.675848   60146 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 11:51:53.676386   60146 out.go:298] Setting JSON to false
	I0610 11:51:53.677403   60146 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5655,"bootTime":1718014659,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 11:51:53.677465   60146 start.go:139] virtualization: kvm guest
	I0610 11:51:53.679851   60146 out.go:177] * [default-k8s-diff-port-281114] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 11:51:53.681209   60146 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 11:51:53.682492   60146 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 11:51:53.681162   60146 notify.go:220] Checking for updates...
	I0610 11:51:53.683939   60146 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 11:51:53.685202   60146 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 11:51:53.686363   60146 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 11:51:53.687770   60146 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 11:51:53.689668   60146 config.go:182] Loaded profile config "default-k8s-diff-port-281114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:51:53.690093   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:51:53.690167   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:51:53.705134   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35827
	I0610 11:51:53.705647   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:51:53.706289   60146 main.go:141] libmachine: Using API Version  1
	I0610 11:51:53.706314   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:51:53.706603   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:51:53.706788   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:51:53.707058   60146 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 11:51:53.707411   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:51:53.707451   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:51:53.722927   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45441
	I0610 11:51:53.723433   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:51:53.723927   60146 main.go:141] libmachine: Using API Version  1
	I0610 11:51:53.723953   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:51:53.724482   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:51:53.724651   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:51:53.763209   60146 out.go:177] * Using the kvm2 driver based on existing profile
	I0610 11:51:53.764436   60146 start.go:297] selected driver: kvm2
	I0610 11:51:53.764446   60146 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-281114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-281114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.222 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:51:53.764537   60146 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 11:51:53.765172   60146 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 11:51:53.765257   60146 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19046-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0610 11:51:53.782641   60146 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0610 11:51:53.783044   60146 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 11:51:53.783099   60146 cni.go:84] Creating CNI manager for ""
	I0610 11:51:53.783109   60146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:51:53.783152   60146 start.go:340] cluster config:
	{Name:default-k8s-diff-port-281114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-281114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.222 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:51:53.783254   60146 iso.go:125] acquiring lock: {Name:mk85871dbcb370cef71a4aa2ff7ef43503908c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 11:51:53.786018   60146 out.go:177] * Starting "default-k8s-diff-port-281114" primary control-plane node in "default-k8s-diff-port-281114" cluster
	I0610 11:51:53.787303   60146 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 11:51:53.787344   60146 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0610 11:51:53.787357   60146 cache.go:56] Caching tarball of preloaded images
	I0610 11:51:53.787439   60146 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 11:51:53.787455   60146 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0610 11:51:53.787569   60146 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/config.json ...
	I0610 11:51:53.787799   60146 start.go:360] acquireMachinesLock for default-k8s-diff-port-281114: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 11:51:53.787855   60146 start.go:364] duration metric: took 30.27µs to acquireMachinesLock for "default-k8s-diff-port-281114"
	I0610 11:51:53.787875   60146 start.go:96] Skipping create...Using existing machine configuration
	I0610 11:51:53.787881   60146 fix.go:54] fixHost starting: 
	I0610 11:51:53.788131   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:51:53.788165   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:51:53.805744   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35417
	I0610 11:51:53.806279   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:51:53.806909   60146 main.go:141] libmachine: Using API Version  1
	I0610 11:51:53.806936   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:51:53.807346   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:51:53.807532   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:51:53.807718   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 11:51:53.809469   60146 fix.go:112] recreateIfNeeded on default-k8s-diff-port-281114: state=Running err=<nil>
	W0610 11:51:53.809507   60146 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 11:51:53.811518   60146 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-281114" VM ...
	I0610 11:51:50.691535   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:52.691588   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:54.692007   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:54.248038   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:54.261302   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:54.261375   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:54.293194   57945 cri.go:89] found id: ""
	I0610 11:51:54.293228   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.293240   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:54.293247   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:54.293307   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:54.326656   57945 cri.go:89] found id: ""
	I0610 11:51:54.326687   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.326699   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:54.326707   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:54.326764   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:54.359330   57945 cri.go:89] found id: ""
	I0610 11:51:54.359365   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.359378   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:54.359386   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:54.359450   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:54.391520   57945 cri.go:89] found id: ""
	I0610 11:51:54.391549   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.391558   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:54.391565   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:54.391642   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:54.426803   57945 cri.go:89] found id: ""
	I0610 11:51:54.426840   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.426850   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:54.426860   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:54.426936   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:54.462618   57945 cri.go:89] found id: ""
	I0610 11:51:54.462645   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.462653   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:54.462659   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:54.462728   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:54.494599   57945 cri.go:89] found id: ""
	I0610 11:51:54.494631   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.494642   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:54.494650   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:54.494701   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:54.528236   57945 cri.go:89] found id: ""
	I0610 11:51:54.528265   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.528280   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:54.528290   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:54.528305   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:54.579562   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:54.579604   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:54.592871   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:54.592899   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:54.661928   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:54.661950   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:54.661984   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:54.741578   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:54.741611   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:53.939312   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:55.940181   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:53.812752   60146 machine.go:94] provisionDockerMachine start ...
	I0610 11:51:53.812779   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:51:53.813001   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:51:53.815580   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:51:53.815981   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:47:50 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:51:53.816013   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:51:53.816111   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:51:53.816288   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:51:53.816435   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:51:53.816577   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:51:53.816743   60146 main.go:141] libmachine: Using SSH client type: native
	I0610 11:51:53.817141   60146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.222 22 <nil> <nil>}
	I0610 11:51:53.817157   60146 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 11:51:56.705435   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:51:56.692515   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:59.192511   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:57.283397   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:57.296631   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:57.296704   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:57.328185   57945 cri.go:89] found id: ""
	I0610 11:51:57.328217   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.328228   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:57.328237   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:57.328302   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:57.360137   57945 cri.go:89] found id: ""
	I0610 11:51:57.360163   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.360173   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:57.360188   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:57.360244   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:57.395638   57945 cri.go:89] found id: ""
	I0610 11:51:57.395680   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.395691   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:57.395700   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:57.395765   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:57.429024   57945 cri.go:89] found id: ""
	I0610 11:51:57.429051   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.429062   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:57.429070   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:57.429132   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:57.461726   57945 cri.go:89] found id: ""
	I0610 11:51:57.461757   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.461767   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:57.461773   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:57.461838   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:57.495055   57945 cri.go:89] found id: ""
	I0610 11:51:57.495078   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.495086   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:57.495092   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:57.495138   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:57.526495   57945 cri.go:89] found id: ""
	I0610 11:51:57.526521   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.526530   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:57.526536   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:57.526598   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:57.559160   57945 cri.go:89] found id: ""
	I0610 11:51:57.559181   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.559189   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:57.559197   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:57.559212   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:57.593801   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:57.593827   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:57.641074   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:57.641106   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:57.654097   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:57.654124   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:57.726137   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:57.726160   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:57.726176   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:00.302303   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:00.314500   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:00.314560   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:00.345865   57945 cri.go:89] found id: ""
	I0610 11:52:00.345889   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.345897   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:00.345902   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:00.345946   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:00.377383   57945 cri.go:89] found id: ""
	I0610 11:52:00.377405   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.377412   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:00.377417   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:00.377482   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:00.408667   57945 cri.go:89] found id: ""
	I0610 11:52:00.408694   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.408701   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:00.408706   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:00.408755   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:00.444349   57945 cri.go:89] found id: ""
	I0610 11:52:00.444379   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.444390   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:00.444397   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:00.444455   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:00.477886   57945 cri.go:89] found id: ""
	I0610 11:52:00.477910   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.477918   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:00.477924   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:00.477982   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:00.508996   57945 cri.go:89] found id: ""
	I0610 11:52:00.509023   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.509030   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:00.509036   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:00.509097   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:00.541548   57945 cri.go:89] found id: ""
	I0610 11:52:00.541572   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.541580   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:00.541585   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:00.541642   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:00.574507   57945 cri.go:89] found id: ""
	I0610 11:52:00.574534   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.574541   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:00.574550   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:00.574565   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:00.610838   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:00.610862   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:00.661155   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:00.661197   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:00.674122   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:00.674154   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:00.745943   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:00.745976   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:00.745993   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:58.439245   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:00.441145   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:59.777253   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:01.691833   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:04.193279   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:03.325365   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:03.337955   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:03.338042   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:03.370767   57945 cri.go:89] found id: ""
	I0610 11:52:03.370798   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.370810   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:03.370818   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:03.370903   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:03.402587   57945 cri.go:89] found id: ""
	I0610 11:52:03.402616   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.402623   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:03.402628   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:03.402684   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:03.436751   57945 cri.go:89] found id: ""
	I0610 11:52:03.436778   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.436788   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:03.436795   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:03.436854   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:03.467745   57945 cri.go:89] found id: ""
	I0610 11:52:03.467778   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.467788   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:03.467798   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:03.467865   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:03.499321   57945 cri.go:89] found id: ""
	I0610 11:52:03.499347   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.499355   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:03.499361   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:03.499419   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:03.534209   57945 cri.go:89] found id: ""
	I0610 11:52:03.534242   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.534253   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:03.534261   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:03.534318   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:03.567837   57945 cri.go:89] found id: ""
	I0610 11:52:03.567871   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.567882   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:03.567889   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:03.567954   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:03.604223   57945 cri.go:89] found id: ""
	I0610 11:52:03.604249   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.604258   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:03.604266   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:03.604280   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:03.659716   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:03.659751   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:03.673389   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:03.673425   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:03.746076   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:03.746104   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:03.746118   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:03.825803   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:03.825837   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:06.362151   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:06.375320   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:06.375394   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:06.409805   57945 cri.go:89] found id: ""
	I0610 11:52:06.409840   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.409851   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:06.409859   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:06.409914   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:06.447126   57945 cri.go:89] found id: ""
	I0610 11:52:06.447157   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.447167   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:06.447174   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:06.447237   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:06.479443   57945 cri.go:89] found id: ""
	I0610 11:52:06.479472   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.479483   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:06.479489   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:06.479546   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:06.511107   57945 cri.go:89] found id: ""
	I0610 11:52:06.511137   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.511148   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:06.511163   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:06.511223   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:06.542727   57945 cri.go:89] found id: ""
	I0610 11:52:06.542753   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.542761   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:06.542767   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:06.542812   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:06.582141   57945 cri.go:89] found id: ""
	I0610 11:52:06.582166   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.582174   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:06.582180   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:06.582239   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:06.615203   57945 cri.go:89] found id: ""
	I0610 11:52:06.615230   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.615240   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:06.615248   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:06.615314   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:06.650286   57945 cri.go:89] found id: ""
	I0610 11:52:06.650310   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.650317   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:06.650326   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:06.650338   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:06.721601   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:06.721631   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:06.721646   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:06.794645   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:06.794679   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:06.830598   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:06.830628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:06.880740   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:06.880786   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:02.939105   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:04.939366   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:07.439715   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:05.861224   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:06.691130   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:09.191608   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:09.394202   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:09.409822   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:09.409898   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:09.451573   57945 cri.go:89] found id: ""
	I0610 11:52:09.451597   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.451605   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:09.451611   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:09.451663   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:09.491039   57945 cri.go:89] found id: ""
	I0610 11:52:09.491069   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.491080   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:09.491087   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:09.491147   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:09.522023   57945 cri.go:89] found id: ""
	I0610 11:52:09.522050   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.522058   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:09.522063   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:09.522108   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:09.554014   57945 cri.go:89] found id: ""
	I0610 11:52:09.554040   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.554048   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:09.554057   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:09.554127   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:09.586285   57945 cri.go:89] found id: ""
	I0610 11:52:09.586318   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.586328   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:09.586336   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:09.586396   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:09.618362   57945 cri.go:89] found id: ""
	I0610 11:52:09.618391   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.618401   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:09.618408   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:09.618465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:09.651067   57945 cri.go:89] found id: ""
	I0610 11:52:09.651097   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.651108   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:09.651116   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:09.651174   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:09.682764   57945 cri.go:89] found id: ""
	I0610 11:52:09.682792   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.682799   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:09.682807   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:09.682819   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:09.755071   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:09.755096   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:09.755109   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:09.833635   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:09.833672   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:09.869744   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:09.869777   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:09.924045   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:09.924079   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:09.440296   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:11.939025   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:08.929184   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:11.691213   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:13.693439   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:12.438029   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:12.452003   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:12.452070   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:12.485680   57945 cri.go:89] found id: ""
	I0610 11:52:12.485711   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.485719   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:12.485725   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:12.485773   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:12.519200   57945 cri.go:89] found id: ""
	I0610 11:52:12.519227   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.519238   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:12.519245   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:12.519317   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:12.553154   57945 cri.go:89] found id: ""
	I0610 11:52:12.553179   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.553185   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:12.553191   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:12.553237   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:12.584499   57945 cri.go:89] found id: ""
	I0610 11:52:12.584543   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.584555   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:12.584564   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:12.584619   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:12.619051   57945 cri.go:89] found id: ""
	I0610 11:52:12.619079   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.619094   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:12.619102   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:12.619165   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:12.653652   57945 cri.go:89] found id: ""
	I0610 11:52:12.653690   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.653702   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:12.653710   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:12.653773   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:12.685887   57945 cri.go:89] found id: ""
	I0610 11:52:12.685919   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.685930   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:12.685938   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:12.685997   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:12.719534   57945 cri.go:89] found id: ""
	I0610 11:52:12.719567   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.719578   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:12.719591   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:12.719603   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:12.770689   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:12.770725   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:12.783574   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:12.783604   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:12.855492   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:12.855518   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:12.855529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:12.928993   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:12.929037   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:15.487670   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:15.501367   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:15.501437   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:15.534205   57945 cri.go:89] found id: ""
	I0610 11:52:15.534248   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.534256   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:15.534262   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:15.534315   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:15.570972   57945 cri.go:89] found id: ""
	I0610 11:52:15.571001   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.571008   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:15.571013   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:15.571073   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:15.604233   57945 cri.go:89] found id: ""
	I0610 11:52:15.604258   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.604267   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:15.604273   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:15.604328   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:15.637119   57945 cri.go:89] found id: ""
	I0610 11:52:15.637150   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.637159   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:15.637167   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:15.637226   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:15.670548   57945 cri.go:89] found id: ""
	I0610 11:52:15.670572   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.670580   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:15.670586   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:15.670644   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:15.706374   57945 cri.go:89] found id: ""
	I0610 11:52:15.706398   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.706406   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:15.706412   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:15.706457   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:15.742828   57945 cri.go:89] found id: ""
	I0610 11:52:15.742852   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.742859   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:15.742865   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:15.742910   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:15.773783   57945 cri.go:89] found id: ""
	I0610 11:52:15.773811   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.773818   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:15.773825   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:15.773835   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:15.828725   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:15.828764   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:15.842653   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:15.842682   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:15.919771   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:15.919794   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:15.919809   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:15.994439   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:15.994478   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:13.943213   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:16.439647   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:15.009211   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:18.081244   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:16.191615   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:18.191760   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:18.532040   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:18.544800   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:18.544893   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:18.579148   57945 cri.go:89] found id: ""
	I0610 11:52:18.579172   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.579180   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:18.579186   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:18.579236   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:18.613005   57945 cri.go:89] found id: ""
	I0610 11:52:18.613028   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.613035   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:18.613042   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:18.613094   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:18.648843   57945 cri.go:89] found id: ""
	I0610 11:52:18.648870   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.648878   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:18.648883   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:18.648939   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:18.678943   57945 cri.go:89] found id: ""
	I0610 11:52:18.678974   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.679014   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:18.679022   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:18.679082   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:18.728485   57945 cri.go:89] found id: ""
	I0610 11:52:18.728516   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.728527   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:18.728535   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:18.728605   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:18.764320   57945 cri.go:89] found id: ""
	I0610 11:52:18.764352   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.764363   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:18.764370   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:18.764431   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:18.797326   57945 cri.go:89] found id: ""
	I0610 11:52:18.797358   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.797369   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:18.797377   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:18.797440   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:18.832517   57945 cri.go:89] found id: ""
	I0610 11:52:18.832552   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.832563   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:18.832574   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:18.832588   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:18.845158   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:18.845192   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:18.915928   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:18.915959   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:18.915974   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:18.990583   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:18.990625   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:19.029044   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:19.029069   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
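	Each "Gathering logs for ..." step above is a shell command run over SSH whose output is collected; a failing source (here "describe nodes", because nothing is listening on localhost:8443) is logged as a warning and skipped rather than aborting the collection. A minimal sketch of that loop, assuming bash, journalctl and dmesg are available on the node — not minikube's logs.go implementation:

	// Sketch only: collect each log source with a shell command and keep
	// going when one of them fails.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		sources := map[string]string{
			"kubelet": "sudo journalctl -u kubelet -n 400",
			"CRI-O":   "sudo journalctl -u crio -n 400",
			"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		}
		for name, cmd := range sources {
			out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				// Like the "failed describe nodes" warning above, record the
				// failure and move on to the next source.
				fmt.Printf("failed to gather %s logs: %v\n", name, err)
				continue
			}
			fmt.Printf("== %s ==\n%s\n", name, out)
		}
	}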
	I0610 11:52:21.582973   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:21.596373   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:21.596453   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:21.633497   57945 cri.go:89] found id: ""
	I0610 11:52:21.633528   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.633538   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:21.633546   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:21.633631   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:21.663999   57945 cri.go:89] found id: ""
	I0610 11:52:21.664055   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.664069   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:21.664078   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:21.664138   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:21.698105   57945 cri.go:89] found id: ""
	I0610 11:52:21.698136   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.698147   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:21.698155   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:21.698213   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:21.730036   57945 cri.go:89] found id: ""
	I0610 11:52:21.730061   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.730068   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:21.730074   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:21.730119   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:21.764484   57945 cri.go:89] found id: ""
	I0610 11:52:21.764507   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.764515   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:21.764520   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:21.764575   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:21.797366   57945 cri.go:89] found id: ""
	I0610 11:52:21.797397   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.797408   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:21.797415   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:21.797478   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:21.832991   57945 cri.go:89] found id: ""
	I0610 11:52:21.833023   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.833030   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:21.833035   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:21.833081   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:21.868859   57945 cri.go:89] found id: ""
	I0610 11:52:21.868890   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.868899   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:21.868924   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:21.868937   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:21.918976   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:21.919013   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:21.934602   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:21.934629   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:22.002888   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:22.002909   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:22.002920   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:22.082894   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:22.082941   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:18.439853   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:20.942040   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:20.692398   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:23.191532   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:24.620683   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:24.634200   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:24.634280   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:24.667181   57945 cri.go:89] found id: ""
	I0610 11:52:24.667209   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.667217   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:24.667222   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:24.667277   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:24.702114   57945 cri.go:89] found id: ""
	I0610 11:52:24.702142   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.702151   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:24.702158   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:24.702220   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:24.734464   57945 cri.go:89] found id: ""
	I0610 11:52:24.734488   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.734497   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:24.734502   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:24.734565   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:24.767074   57945 cri.go:89] found id: ""
	I0610 11:52:24.767124   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.767132   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:24.767138   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:24.767210   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:24.800328   57945 cri.go:89] found id: ""
	I0610 11:52:24.800358   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.800369   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:24.800376   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:24.800442   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:24.837785   57945 cri.go:89] found id: ""
	I0610 11:52:24.837814   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.837822   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:24.837828   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:24.837878   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:24.874886   57945 cri.go:89] found id: ""
	I0610 11:52:24.874910   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.874917   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:24.874923   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:24.874968   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:24.912191   57945 cri.go:89] found id: ""
	I0610 11:52:24.912217   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.912235   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:24.912247   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:24.912265   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:24.968229   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:24.968262   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:24.981018   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:24.981048   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:25.049879   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:25.049907   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:25.049922   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:25.135103   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:25.135156   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:23.440293   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:25.939540   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:27.201186   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:25.691136   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:27.691669   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:27.687667   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:27.700418   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:27.700486   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:27.733712   57945 cri.go:89] found id: ""
	I0610 11:52:27.733740   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.733749   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:27.733754   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:27.733839   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:27.774063   57945 cri.go:89] found id: ""
	I0610 11:52:27.774089   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.774100   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:27.774108   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:27.774169   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:27.813906   57945 cri.go:89] found id: ""
	I0610 11:52:27.813945   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.813956   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:27.813963   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:27.814031   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:27.845877   57945 cri.go:89] found id: ""
	I0610 11:52:27.845901   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.845909   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:27.845915   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:27.845961   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:27.880094   57945 cri.go:89] found id: ""
	I0610 11:52:27.880139   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.880148   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:27.880153   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:27.880206   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:27.914308   57945 cri.go:89] found id: ""
	I0610 11:52:27.914332   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.914342   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:27.914355   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:27.914420   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:27.949386   57945 cri.go:89] found id: ""
	I0610 11:52:27.949412   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.949423   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:27.949430   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:27.949490   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:27.983901   57945 cri.go:89] found id: ""
	I0610 11:52:27.983927   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.983938   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:27.983948   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:27.983963   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:28.032820   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:28.032853   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:28.046306   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:28.046332   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:28.120614   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:28.120642   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:28.120657   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:28.202182   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:28.202217   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:30.741274   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:30.754276   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:30.754358   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:30.789142   57945 cri.go:89] found id: ""
	I0610 11:52:30.789174   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.789185   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:30.789193   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:30.789255   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:30.822319   57945 cri.go:89] found id: ""
	I0610 11:52:30.822350   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.822362   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:30.822369   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:30.822428   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:30.853166   57945 cri.go:89] found id: ""
	I0610 11:52:30.853192   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.853199   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:30.853204   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:30.853271   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:30.892290   57945 cri.go:89] found id: ""
	I0610 11:52:30.892320   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.892331   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:30.892339   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:30.892401   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:30.938603   57945 cri.go:89] found id: ""
	I0610 11:52:30.938629   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.938639   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:30.938646   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:30.938703   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:30.994532   57945 cri.go:89] found id: ""
	I0610 11:52:30.994567   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.994583   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:30.994589   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:30.994649   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:31.041818   57945 cri.go:89] found id: ""
	I0610 11:52:31.041847   57945 logs.go:276] 0 containers: []
	W0610 11:52:31.041859   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:31.041867   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:31.041923   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:31.079897   57945 cri.go:89] found id: ""
	I0610 11:52:31.079927   57945 logs.go:276] 0 containers: []
	W0610 11:52:31.079938   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:31.079951   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:31.079967   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:31.092291   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:31.092321   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:31.163921   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:31.163943   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:31.163955   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:31.242247   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:31.242287   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:31.281257   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:31.281286   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:27.940743   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:30.440529   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:30.273256   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:30.192386   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:32.192470   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:34.691408   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:33.837783   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:33.851085   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:33.851164   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:33.885285   57945 cri.go:89] found id: ""
	I0610 11:52:33.885314   57945 logs.go:276] 0 containers: []
	W0610 11:52:33.885324   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:33.885332   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:33.885391   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:33.924958   57945 cri.go:89] found id: ""
	I0610 11:52:33.924996   57945 logs.go:276] 0 containers: []
	W0610 11:52:33.925006   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:33.925022   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:33.925083   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:33.958563   57945 cri.go:89] found id: ""
	I0610 11:52:33.958589   57945 logs.go:276] 0 containers: []
	W0610 11:52:33.958598   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:33.958606   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:33.958665   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:33.991575   57945 cri.go:89] found id: ""
	I0610 11:52:33.991606   57945 logs.go:276] 0 containers: []
	W0610 11:52:33.991616   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:33.991624   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:33.991693   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:34.029700   57945 cri.go:89] found id: ""
	I0610 11:52:34.029729   57945 logs.go:276] 0 containers: []
	W0610 11:52:34.029740   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:34.029748   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:34.029805   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:34.068148   57945 cri.go:89] found id: ""
	I0610 11:52:34.068183   57945 logs.go:276] 0 containers: []
	W0610 11:52:34.068194   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:34.068201   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:34.068275   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:34.100735   57945 cri.go:89] found id: ""
	I0610 11:52:34.100760   57945 logs.go:276] 0 containers: []
	W0610 11:52:34.100767   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:34.100772   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:34.100817   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:34.132898   57945 cri.go:89] found id: ""
	I0610 11:52:34.132927   57945 logs.go:276] 0 containers: []
	W0610 11:52:34.132937   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:34.132958   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:34.132972   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:34.184690   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:34.184723   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:34.199604   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:34.199641   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:34.270744   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:34.270763   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:34.270775   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:34.352291   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:34.352334   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:36.894188   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:36.914098   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:36.914158   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:36.957378   57945 cri.go:89] found id: ""
	I0610 11:52:36.957408   57945 logs.go:276] 0 containers: []
	W0610 11:52:36.957419   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:36.957427   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:36.957498   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:37.003576   57945 cri.go:89] found id: ""
	I0610 11:52:37.003602   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.003611   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:37.003618   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:37.003677   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:37.040221   57945 cri.go:89] found id: ""
	I0610 11:52:37.040245   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.040253   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:37.040259   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:37.040307   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:37.078151   57945 cri.go:89] found id: ""
	I0610 11:52:37.078185   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.078195   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:37.078202   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:37.078261   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:37.117446   57945 cri.go:89] found id: ""
	I0610 11:52:37.117468   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.117476   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:37.117482   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:37.117548   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:37.155320   57945 cri.go:89] found id: ""
	I0610 11:52:37.155344   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.155356   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:37.155364   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:37.155414   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:37.192194   57945 cri.go:89] found id: ""
	I0610 11:52:37.192221   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.192230   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:37.192238   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:37.192303   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:37.225567   57945 cri.go:89] found id: ""
	I0610 11:52:37.225594   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.225605   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:37.225616   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:37.225632   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:37.240139   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:37.240164   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 11:52:32.940571   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:34.940672   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:37.440898   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:36.353199   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:36.697419   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:39.190952   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	W0610 11:52:37.307754   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:37.307784   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:37.307801   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:37.385929   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:37.385964   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:37.424991   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:37.425029   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:39.974839   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:39.988788   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:39.988858   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:40.025922   57945 cri.go:89] found id: ""
	I0610 11:52:40.025947   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.025954   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:40.025967   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:40.026026   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:40.062043   57945 cri.go:89] found id: ""
	I0610 11:52:40.062076   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.062085   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:40.062094   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:40.062158   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:40.095441   57945 cri.go:89] found id: ""
	I0610 11:52:40.095465   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.095472   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:40.095478   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:40.095529   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:40.127633   57945 cri.go:89] found id: ""
	I0610 11:52:40.127662   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.127672   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:40.127680   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:40.127740   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:40.161232   57945 cri.go:89] found id: ""
	I0610 11:52:40.161257   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.161267   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:40.161274   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:40.161334   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:40.194491   57945 cri.go:89] found id: ""
	I0610 11:52:40.194521   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.194529   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:40.194535   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:40.194583   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:40.226376   57945 cri.go:89] found id: ""
	I0610 11:52:40.226404   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.226411   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:40.226416   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:40.226465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:40.257938   57945 cri.go:89] found id: ""
	I0610 11:52:40.257968   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.257978   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:40.257988   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:40.258004   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:40.327247   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:40.327276   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:40.327291   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:40.404231   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:40.404263   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:40.441554   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:40.441585   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:40.491952   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:40.491987   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:39.939538   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:41.939639   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:39.425159   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:41.191808   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:43.695646   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:43.006217   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:43.019113   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:43.019187   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:43.053010   57945 cri.go:89] found id: ""
	I0610 11:52:43.053035   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.053045   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:43.053051   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:43.053115   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:43.086118   57945 cri.go:89] found id: ""
	I0610 11:52:43.086145   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.086156   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:43.086171   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:43.086235   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:43.117892   57945 cri.go:89] found id: ""
	I0610 11:52:43.117919   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.117929   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:43.117937   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:43.118011   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:43.149751   57945 cri.go:89] found id: ""
	I0610 11:52:43.149777   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.149787   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:43.149795   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:43.149855   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:43.184215   57945 cri.go:89] found id: ""
	I0610 11:52:43.184250   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.184261   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:43.184268   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:43.184332   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:43.219758   57945 cri.go:89] found id: ""
	I0610 11:52:43.219787   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.219797   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:43.219805   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:43.219868   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:43.250698   57945 cri.go:89] found id: ""
	I0610 11:52:43.250728   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.250738   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:43.250746   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:43.250803   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:43.286526   57945 cri.go:89] found id: ""
	I0610 11:52:43.286556   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.286566   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:43.286576   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:43.286589   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:43.362219   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:43.362255   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:43.398332   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:43.398366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:43.449468   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:43.449502   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:43.462346   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:43.462381   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:43.539578   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:46.039720   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:46.052749   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:46.052821   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:46.093110   57945 cri.go:89] found id: ""
	I0610 11:52:46.093139   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.093147   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:46.093152   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:46.093219   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:46.130885   57945 cri.go:89] found id: ""
	I0610 11:52:46.130916   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.130924   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:46.130930   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:46.130977   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:46.167471   57945 cri.go:89] found id: ""
	I0610 11:52:46.167507   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.167524   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:46.167531   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:46.167593   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:46.204776   57945 cri.go:89] found id: ""
	I0610 11:52:46.204799   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.204807   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:46.204812   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:46.204860   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:46.244826   57945 cri.go:89] found id: ""
	I0610 11:52:46.244859   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.244869   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:46.244876   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:46.244942   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:46.281757   57945 cri.go:89] found id: ""
	I0610 11:52:46.281783   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.281791   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:46.281797   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:46.281844   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:46.319517   57945 cri.go:89] found id: ""
	I0610 11:52:46.319546   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.319558   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:46.319566   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:46.319636   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:46.355806   57945 cri.go:89] found id: ""
	I0610 11:52:46.355835   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.355846   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:46.355858   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:46.355872   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:46.433087   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:46.433131   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:46.468792   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:46.468829   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:46.517931   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:46.517969   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:46.530892   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:46.530935   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:46.592585   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:43.940733   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:46.440354   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:45.505281   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:48.577214   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:46.191520   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:48.691214   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:49.093662   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:49.106539   57945 kubeadm.go:591] duration metric: took 4m4.396325615s to restartPrimaryControlPlane
	W0610 11:52:49.106625   57945 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0610 11:52:49.106658   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0610 11:52:48.441202   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:50.433923   57572 pod_ready.go:81] duration metric: took 4m0.000312516s for pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace to be "Ready" ...
	E0610 11:52:50.433960   57572 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0610 11:52:50.433982   57572 pod_ready.go:38] duration metric: took 4m5.113212783s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 11:52:50.434008   57572 kubeadm.go:591] duration metric: took 4m16.406085019s to restartPrimaryControlPlane
	W0610 11:52:50.434091   57572 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0610 11:52:50.434128   57572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0610 11:52:53.503059   57945 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.396374472s)
	I0610 11:52:53.503148   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:52:53.518235   57945 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 11:52:53.529298   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:52:53.539273   57945 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:52:53.539297   57945 kubeadm.go:156] found existing configuration files:
	
	I0610 11:52:53.539341   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 11:52:53.548285   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:52:53.548354   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:52:53.557659   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 11:52:53.569253   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:52:53.569330   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:52:53.579689   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 11:52:53.589800   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:52:53.589865   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:52:53.600324   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 11:52:53.610542   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:52:53.610612   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
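The block above checks each of the four kubeconfig files for the expected control-plane endpoint and removes any file that does not contain it; in this run all four are already missing, so the greps exit with status 2 and the rm calls are no-ops before kubeadm regenerates the files. A minimal sketch of that check-then-remove loop, assuming local file access rather than the SSH runner used in the log:

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		// Missing file or wrong endpoint: remove it so kubeadm regenerates it.
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
    				log.Printf("could not remove %s: %v", f, rmErr)
    			}
    			continue
    		}
    		log.Printf("%s already points at %s, keeping it", f, endpoint)
    	}
    }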
	I0610 11:52:53.620144   57945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 11:52:53.687195   57945 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0610 11:52:53.687302   57945 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 11:52:53.851035   57945 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 11:52:53.851178   57945 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 11:52:53.851305   57945 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 11:52:54.037503   57945 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 11:52:54.039523   57945 out.go:204]   - Generating certificates and keys ...
	I0610 11:52:54.039621   57945 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 11:52:54.039718   57945 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 11:52:54.039850   57945 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 11:52:54.039959   57945 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 11:52:54.040055   57945 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 11:52:54.040135   57945 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 11:52:54.040233   57945 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 11:52:54.040506   57945 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 11:52:54.040892   57945 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 11:52:54.041344   57945 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 11:52:54.041411   57945 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 11:52:54.041507   57945 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 11:52:54.151486   57945 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 11:52:54.389555   57945 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 11:52:54.507653   57945 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 11:52:54.690886   57945 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 11:52:54.708542   57945 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 11:52:54.712251   57945 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 11:52:54.712504   57945 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 11:52:54.872755   57945 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 11:52:50.691517   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:53.191418   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:54.874801   57945 out.go:204]   - Booting up control plane ...
	I0610 11:52:54.874978   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 11:52:54.883224   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 11:52:54.885032   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 11:52:54.886182   57945 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 11:52:54.891030   57945 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
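The init invocation at 11:52:53.620 above runs a versioned kubeadm binary against the generated config and passes a long --ignore-preflight-errors list because the VM is being reused (existing manifest and etcd directories, port 10250, swap, CPU and memory checks). A sketch of how such a command line can be assembled; the paths come from the log, the ignore list is abbreviated here, and actually running it requires the minikube guest:

    package main

    import (
    	"os/exec"
    	"strings"
    )

    // buildKubeadmInit assembles an init invocation of the shape seen above:
    // versioned binary directory on PATH, the generated config, and a list of
    // preflight checks to skip on a reused VM.
    func buildKubeadmInit(version string, ignored []string) *exec.Cmd {
    	binDir := "/var/lib/minikube/binaries/" + version
    	script := "sudo env PATH=\"" + binDir + ":$PATH\" kubeadm init" +
    		" --config /var/tmp/minikube/kubeadm.yaml" +
    		" --ignore-preflight-errors=" + strings.Join(ignored, ",")
    	return exec.Command("/bin/bash", "-c", script)
    }

    func main() {
    	_ = buildKubeadmInit("v1.20.0", []string{
    		"DirAvailable--etc-kubernetes-manifests", "Port-10250", "Swap", "NumCPU", "Mem",
    	})
    }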
	I0610 11:52:54.661214   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:57.729160   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:55.691987   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:58.192548   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:00.692060   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:03.192673   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:03.809217   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:06.885176   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:05.692004   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:07.692545   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:12.961318   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:10.191064   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:12.192258   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:14.691564   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:16.033278   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
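The repeated "no route to host" lines from process 60146 are libmachine retrying an SSH TCP dial to 192.168.50.222:22 while that VM is unreachable. A minimal sketch of a bounded dial-and-retry loop of that kind, using plain net.DialTimeout rather than libmachine's code; the attempt count and intervals are illustrative:

    package main

    import (
    	"log"
    	"net"
    	"time"
    )

    // dialWithRetry keeps attempting a TCP connection until it succeeds or the
    // retry budget runs out, mirroring the spacing of the dial errors above.
    func dialWithRetry(addr string, attempts int, wait time.Duration) (net.Conn, error) {
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
    		if err == nil {
    			return conn, nil
    		}
    		lastErr = err
    		log.Printf("dial %s failed (%d/%d): %v", addr, i+1, attempts, err)
    		time.Sleep(wait)
    	}
    	return nil, lastErr
    }

    func main() {
    	if conn, err := dialWithRetry("192.168.50.222:22", 5, 3*time.Second); err == nil {
    		conn.Close()
    	}
    }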
	I0610 11:53:16.691670   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:18.691801   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:21.778313   57572 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.344150357s)
	I0610 11:53:21.778398   57572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:53:21.793960   57572 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 11:53:21.803952   57572 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:53:21.813685   57572 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:53:21.813709   57572 kubeadm.go:156] found existing configuration files:
	
	I0610 11:53:21.813758   57572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 11:53:21.823957   57572 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:53:21.824027   57572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:53:21.833125   57572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 11:53:21.841834   57572 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:53:21.841893   57572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:53:21.850999   57572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 11:53:21.859858   57572 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:53:21.859920   57572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:53:21.869076   57572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 11:53:21.877079   57572 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:53:21.877141   57572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 11:53:21.887614   57572 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 11:53:21.941932   57572 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0610 11:53:21.941987   57572 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 11:53:22.084118   57572 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 11:53:22.084219   57572 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 11:53:22.084310   57572 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 11:53:22.287685   57572 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 11:53:22.289568   57572 out.go:204]   - Generating certificates and keys ...
	I0610 11:53:22.289674   57572 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 11:53:22.289779   57572 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 11:53:22.289917   57572 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 11:53:22.290032   57572 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 11:53:22.290144   57572 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 11:53:22.290234   57572 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 11:53:22.290339   57572 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 11:53:22.290439   57572 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 11:53:22.290558   57572 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 11:53:22.290674   57572 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 11:53:22.290732   57572 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 11:53:22.290819   57572 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 11:53:22.354674   57572 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 11:53:22.573948   57572 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 11:53:22.805694   57572 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 11:53:22.914740   57572 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 11:53:23.218887   57572 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 11:53:23.221479   57572 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 11:53:23.223937   57572 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 11:53:22.113312   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:20.692241   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:23.192124   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:23.695912   56769 pod_ready.go:81] duration metric: took 4m0.01073501s for pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace to be "Ready" ...
	E0610 11:53:23.695944   56769 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0610 11:53:23.695954   56769 pod_ready.go:38] duration metric: took 4m2.412094982s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
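Process 56769 gives up on metrics-server-569cc877fc-5zg8j here: the pod's Ready condition never flips within the 4m0s budget, so the extra wait ends with a deadline error and startup continues without it. A generic sketch of that pattern, a poll loop bounded by a context deadline; the check function is a stand-in for the real pod-condition lookup, and the intervals are approximations of what the log shows:

    package main

    import (
    	"context"
    	"fmt"
    	"time"
    )

    // waitReady polls check until it reports true or the context deadline passes.
    func waitReady(ctx context.Context, interval time.Duration, check func() bool) error {
    	ticker := time.NewTicker(interval)
    	defer ticker.Stop()
    	for {
    		if check() {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("waitPodCondition: %w", ctx.Err())
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	// The log's budget is 4m0s; a short deadline keeps this demo quick.
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()
    	err := waitReady(ctx, 500*time.Millisecond, func() bool { return false })
    	fmt.Println(err) // waitPodCondition: context deadline exceeded
    }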
	I0610 11:53:23.695972   56769 api_server.go:52] waiting for apiserver process to appear ...
	I0610 11:53:23.696001   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:53:23.696058   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:53:23.758822   56769 cri.go:89] found id: "61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:23.758850   56769 cri.go:89] found id: ""
	I0610 11:53:23.758860   56769 logs.go:276] 1 containers: [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29]
	I0610 11:53:23.758921   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.765128   56769 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:53:23.765198   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:53:23.798454   56769 cri.go:89] found id: "0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:23.798483   56769 cri.go:89] found id: ""
	I0610 11:53:23.798494   56769 logs.go:276] 1 containers: [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c]
	I0610 11:53:23.798560   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.802985   56769 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:53:23.803051   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:53:23.855781   56769 cri.go:89] found id: "04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:23.855810   56769 cri.go:89] found id: ""
	I0610 11:53:23.855819   56769 logs.go:276] 1 containers: [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933]
	I0610 11:53:23.855873   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.860285   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:53:23.860363   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:53:23.901849   56769 cri.go:89] found id: "7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:23.901868   56769 cri.go:89] found id: ""
	I0610 11:53:23.901878   56769 logs.go:276] 1 containers: [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9]
	I0610 11:53:23.901935   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.906116   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:53:23.906183   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:53:23.941376   56769 cri.go:89] found id: "3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:23.941396   56769 cri.go:89] found id: ""
	I0610 11:53:23.941405   56769 logs.go:276] 1 containers: [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb]
	I0610 11:53:23.941463   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.947379   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:53:23.947450   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:53:23.984733   56769 cri.go:89] found id: "7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:23.984757   56769 cri.go:89] found id: ""
	I0610 11:53:23.984766   56769 logs.go:276] 1 containers: [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43]
	I0610 11:53:23.984839   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.988701   56769 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:53:23.988752   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:53:24.024067   56769 cri.go:89] found id: ""
	I0610 11:53:24.024094   56769 logs.go:276] 0 containers: []
	W0610 11:53:24.024103   56769 logs.go:278] No container was found matching "kindnet"
	I0610 11:53:24.024110   56769 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0610 11:53:24.024170   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0610 11:53:24.058220   56769 cri.go:89] found id: "5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:24.058250   56769 cri.go:89] found id: "8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:24.058255   56769 cri.go:89] found id: ""
	I0610 11:53:24.058263   56769 logs.go:276] 2 containers: [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262]
	I0610 11:53:24.058321   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:24.062072   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:24.065706   56769 logs.go:123] Gathering logs for etcd [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c] ...
	I0610 11:53:24.065723   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:24.104622   56769 logs.go:123] Gathering logs for coredns [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933] ...
	I0610 11:53:24.104652   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:24.142432   56769 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:53:24.142457   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:53:24.670328   56769 logs.go:123] Gathering logs for container status ...
	I0610 11:53:24.670375   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:53:24.726557   56769 logs.go:123] Gathering logs for kube-scheduler [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9] ...
	I0610 11:53:24.726592   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:24.769111   56769 logs.go:123] Gathering logs for kube-proxy [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb] ...
	I0610 11:53:24.769150   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:24.811199   56769 logs.go:123] Gathering logs for kube-controller-manager [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43] ...
	I0610 11:53:24.811246   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:24.876489   56769 logs.go:123] Gathering logs for storage-provisioner [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e] ...
	I0610 11:53:24.876547   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:23.225694   57572 out.go:204]   - Booting up control plane ...
	I0610 11:53:23.225803   57572 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 11:53:23.225898   57572 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 11:53:23.226004   57572 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 11:53:23.245138   57572 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 11:53:23.246060   57572 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 11:53:23.246121   57572 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 11:53:23.375562   57572 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0610 11:53:23.375689   57572 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 11:53:23.877472   57572 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.888048ms
	I0610 11:53:23.877560   57572 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0610 11:53:25.185274   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:28.879976   57572 kubeadm.go:309] [api-check] The API server is healthy after 5.002334008s
	I0610 11:53:28.902382   57572 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 11:53:28.924552   57572 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 11:53:28.956686   57572 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 11:53:28.956958   57572 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-298179 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 11:53:28.971883   57572 kubeadm.go:309] [bootstrap-token] Using token: zdzp8m.ttyzgfzbws24vbk8
	I0610 11:53:24.916641   56769 logs.go:123] Gathering logs for kubelet ...
	I0610 11:53:24.916824   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:53:24.980737   56769 logs.go:123] Gathering logs for dmesg ...
	I0610 11:53:24.980779   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:53:24.998139   56769 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:53:24.998163   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 11:53:25.113809   56769 logs.go:123] Gathering logs for kube-apiserver [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29] ...
	I0610 11:53:25.113839   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:25.168214   56769 logs.go:123] Gathering logs for storage-provisioner [8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262] ...
	I0610 11:53:25.168254   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:27.708296   56769 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:53:27.730996   56769 api_server.go:72] duration metric: took 4m14.155149231s to wait for apiserver process to appear ...
	I0610 11:53:27.731021   56769 api_server.go:88] waiting for apiserver healthz status ...
	I0610 11:53:27.731057   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:53:27.731116   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:53:27.767385   56769 cri.go:89] found id: "61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:27.767411   56769 cri.go:89] found id: ""
	I0610 11:53:27.767420   56769 logs.go:276] 1 containers: [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29]
	I0610 11:53:27.767465   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:27.771646   56769 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:53:27.771723   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:53:27.806969   56769 cri.go:89] found id: "0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:27.806996   56769 cri.go:89] found id: ""
	I0610 11:53:27.807005   56769 logs.go:276] 1 containers: [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c]
	I0610 11:53:27.807060   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:27.811580   56769 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:53:27.811655   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:53:27.850853   56769 cri.go:89] found id: "04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:27.850879   56769 cri.go:89] found id: ""
	I0610 11:53:27.850888   56769 logs.go:276] 1 containers: [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933]
	I0610 11:53:27.850947   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:27.855284   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:53:27.855347   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:53:27.901228   56769 cri.go:89] found id: "7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:27.901256   56769 cri.go:89] found id: ""
	I0610 11:53:27.901266   56769 logs.go:276] 1 containers: [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9]
	I0610 11:53:27.901322   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:27.905361   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:53:27.905428   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:53:27.943162   56769 cri.go:89] found id: "3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:27.943187   56769 cri.go:89] found id: ""
	I0610 11:53:27.943197   56769 logs.go:276] 1 containers: [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb]
	I0610 11:53:27.943251   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:27.951934   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:53:27.952015   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:53:27.996288   56769 cri.go:89] found id: "7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:27.996316   56769 cri.go:89] found id: ""
	I0610 11:53:27.996325   56769 logs.go:276] 1 containers: [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43]
	I0610 11:53:27.996381   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:28.000307   56769 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:53:28.000378   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:53:28.036978   56769 cri.go:89] found id: ""
	I0610 11:53:28.037016   56769 logs.go:276] 0 containers: []
	W0610 11:53:28.037026   56769 logs.go:278] No container was found matching "kindnet"
	I0610 11:53:28.037033   56769 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0610 11:53:28.037091   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0610 11:53:28.078338   56769 cri.go:89] found id: "5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:28.078363   56769 cri.go:89] found id: "8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:28.078368   56769 cri.go:89] found id: ""
	I0610 11:53:28.078377   56769 logs.go:276] 2 containers: [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262]
	I0610 11:53:28.078433   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:28.082899   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:28.087382   56769 logs.go:123] Gathering logs for storage-provisioner [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e] ...
	I0610 11:53:28.087416   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:28.123014   56769 logs.go:123] Gathering logs for kubelet ...
	I0610 11:53:28.123051   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:53:28.186128   56769 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:53:28.186160   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 11:53:28.314495   56769 logs.go:123] Gathering logs for kube-apiserver [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29] ...
	I0610 11:53:28.314539   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:28.358953   56769 logs.go:123] Gathering logs for coredns [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933] ...
	I0610 11:53:28.358981   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:28.394280   56769 logs.go:123] Gathering logs for kube-controller-manager [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43] ...
	I0610 11:53:28.394306   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:28.450138   56769 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:53:28.450172   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:53:28.851268   56769 logs.go:123] Gathering logs for container status ...
	I0610 11:53:28.851307   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:53:28.909176   56769 logs.go:123] Gathering logs for dmesg ...
	I0610 11:53:28.909202   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:53:28.927322   56769 logs.go:123] Gathering logs for etcd [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c] ...
	I0610 11:53:28.927359   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:28.983941   56769 logs.go:123] Gathering logs for kube-scheduler [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9] ...
	I0610 11:53:28.983971   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:29.023327   56769 logs.go:123] Gathering logs for kube-proxy [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb] ...
	I0610 11:53:29.023352   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:29.063624   56769 logs.go:123] Gathering logs for storage-provisioner [8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262] ...
	I0610 11:53:29.063655   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:28.973316   57572 out.go:204]   - Configuring RBAC rules ...
	I0610 11:53:28.973437   57572 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 11:53:28.979726   57572 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 11:53:28.989075   57572 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 11:53:28.999678   57572 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 11:53:29.005717   57572 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 11:53:29.014439   57572 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 11:53:29.292088   57572 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 11:53:29.734969   57572 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 11:53:30.288723   57572 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 11:53:30.289824   57572 kubeadm.go:309] 
	I0610 11:53:30.289918   57572 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 11:53:30.289930   57572 kubeadm.go:309] 
	I0610 11:53:30.290061   57572 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 11:53:30.290078   57572 kubeadm.go:309] 
	I0610 11:53:30.290107   57572 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 11:53:30.290191   57572 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 11:53:30.290268   57572 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 11:53:30.290316   57572 kubeadm.go:309] 
	I0610 11:53:30.290402   57572 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 11:53:30.290412   57572 kubeadm.go:309] 
	I0610 11:53:30.290481   57572 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 11:53:30.290494   57572 kubeadm.go:309] 
	I0610 11:53:30.290539   57572 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 11:53:30.290602   57572 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 11:53:30.290659   57572 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 11:53:30.290666   57572 kubeadm.go:309] 
	I0610 11:53:30.290749   57572 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 11:53:30.290816   57572 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 11:53:30.290823   57572 kubeadm.go:309] 
	I0610 11:53:30.290901   57572 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token zdzp8m.ttyzgfzbws24vbk8 \
	I0610 11:53:30.291011   57572 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e \
	I0610 11:53:30.291032   57572 kubeadm.go:309] 	--control-plane 
	I0610 11:53:30.291038   57572 kubeadm.go:309] 
	I0610 11:53:30.291113   57572 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 11:53:30.291120   57572 kubeadm.go:309] 
	I0610 11:53:30.291230   57572 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token zdzp8m.ttyzgfzbws24vbk8 \
	I0610 11:53:30.291370   57572 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e 
	I0610 11:53:30.291895   57572 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 11:53:30.291925   57572 cni.go:84] Creating CNI manager for ""
	I0610 11:53:30.291936   57572 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:53:30.294227   57572 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 11:53:30.295470   57572 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 11:53:30.306011   57572 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
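The 496-byte /etc/cni/net.d/1-k8s.conflist itself is not shown in the log. The sketch below writes a representative bridge plus host-local IPAM configuration of the kind the "Configuring bridge CNI" step refers to; the field values and the 10.244.0.0/16 subnet are assumptions for illustration, not the file minikube generated here:

    package main

    import (
    	"log"
    	"os"
    )

    // A representative CNI conflist for the bridge plugin with host-local IPAM.
    // Values are illustrative only; the real file is generated by minikube.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    `

    func main() {
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }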
	I0610 11:53:30.322832   57572 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 11:53:30.322890   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:30.322960   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-298179 minikube.k8s.io/updated_at=2024_06_10T11_53_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=no-preload-298179 minikube.k8s.io/primary=true
	I0610 11:53:30.486915   57572 ops.go:34] apiserver oom_adj: -16
	I0610 11:53:30.487320   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:30.988103   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:31.488094   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:31.988314   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:32.487603   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:31.265182   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:31.597111   56769 api_server.go:253] Checking apiserver healthz at https://192.168.61.19:8443/healthz ...
	I0610 11:53:31.601589   56769 api_server.go:279] https://192.168.61.19:8443/healthz returned 200:
	ok
	I0610 11:53:31.602609   56769 api_server.go:141] control plane version: v1.30.1
	I0610 11:53:31.602631   56769 api_server.go:131] duration metric: took 3.871604169s to wait for apiserver health ...
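The health wait above resolves after about 3.87s once https://192.168.61.19:8443/healthz returns 200 with the body "ok". A minimal sketch of a single probe of that kind; it skips certificate verification to stay self-contained, whereas the real check trusts the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // checkHealthz performs one apiserver health probe, as at 11:53:31.597 above.
    func checkHealthz(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Skipping verification keeps the sketch self-contained; the real
    		// check verifies the apiserver certificate against the cluster CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
    	}
    	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
    	return nil
    }

    func main() {
    	_ = checkHealthz("https://192.168.61.19:8443/healthz")
    }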
	I0610 11:53:31.602639   56769 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 11:53:31.602663   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:53:31.602716   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:53:31.650102   56769 cri.go:89] found id: "61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:31.650130   56769 cri.go:89] found id: ""
	I0610 11:53:31.650139   56769 logs.go:276] 1 containers: [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29]
	I0610 11:53:31.650197   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.654234   56769 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:53:31.654299   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:53:31.690704   56769 cri.go:89] found id: "0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:31.690736   56769 cri.go:89] found id: ""
	I0610 11:53:31.690750   56769 logs.go:276] 1 containers: [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c]
	I0610 11:53:31.690810   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.695139   56769 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:53:31.695209   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:53:31.732593   56769 cri.go:89] found id: "04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:31.732614   56769 cri.go:89] found id: ""
	I0610 11:53:31.732621   56769 logs.go:276] 1 containers: [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933]
	I0610 11:53:31.732667   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.737201   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:53:31.737277   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:53:31.774177   56769 cri.go:89] found id: "7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:31.774219   56769 cri.go:89] found id: ""
	I0610 11:53:31.774239   56769 logs.go:276] 1 containers: [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9]
	I0610 11:53:31.774300   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.778617   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:53:31.778695   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:53:31.816633   56769 cri.go:89] found id: "3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:31.816657   56769 cri.go:89] found id: ""
	I0610 11:53:31.816665   56769 logs.go:276] 1 containers: [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb]
	I0610 11:53:31.816715   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.820846   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:53:31.820928   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:53:31.857021   56769 cri.go:89] found id: "7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:31.857052   56769 cri.go:89] found id: ""
	I0610 11:53:31.857062   56769 logs.go:276] 1 containers: [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43]
	I0610 11:53:31.857127   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.862825   56769 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:53:31.862888   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:53:31.903792   56769 cri.go:89] found id: ""
	I0610 11:53:31.903817   56769 logs.go:276] 0 containers: []
	W0610 11:53:31.903825   56769 logs.go:278] No container was found matching "kindnet"
	I0610 11:53:31.903837   56769 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0610 11:53:31.903885   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0610 11:53:31.942392   56769 cri.go:89] found id: "5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:31.942414   56769 cri.go:89] found id: "8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:31.942419   56769 cri.go:89] found id: ""
	I0610 11:53:31.942428   56769 logs.go:276] 2 containers: [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262]
	I0610 11:53:31.942481   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.949047   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.953590   56769 logs.go:123] Gathering logs for kube-scheduler [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9] ...
	I0610 11:53:31.953625   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:31.991926   56769 logs.go:123] Gathering logs for kube-controller-manager [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43] ...
	I0610 11:53:31.991954   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:32.040857   56769 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:53:32.040894   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:53:32.432680   56769 logs.go:123] Gathering logs for container status ...
	I0610 11:53:32.432731   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:53:32.474819   56769 logs.go:123] Gathering logs for kubelet ...
	I0610 11:53:32.474849   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:53:32.530152   56769 logs.go:123] Gathering logs for dmesg ...
	I0610 11:53:32.530189   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:53:32.547698   56769 logs.go:123] Gathering logs for etcd [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c] ...
	I0610 11:53:32.547735   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:32.598580   56769 logs.go:123] Gathering logs for kube-proxy [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb] ...
	I0610 11:53:32.598634   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:32.643864   56769 logs.go:123] Gathering logs for storage-provisioner [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e] ...
	I0610 11:53:32.643900   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:32.679085   56769 logs.go:123] Gathering logs for storage-provisioner [8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262] ...
	I0610 11:53:32.679118   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:32.714247   56769 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:53:32.714279   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 11:53:32.818508   56769 logs.go:123] Gathering logs for kube-apiserver [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29] ...
	I0610 11:53:32.818551   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:32.862390   56769 logs.go:123] Gathering logs for coredns [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933] ...
	I0610 11:53:32.862424   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:35.408169   56769 system_pods.go:59] 8 kube-system pods found
	I0610 11:53:35.408198   56769 system_pods.go:61] "coredns-7db6d8ff4d-7dlzb" [4b2618cd-b48c-44bd-a07d-4fe4585a14fa] Running
	I0610 11:53:35.408203   56769 system_pods.go:61] "etcd-embed-certs-832735" [4b7d413d-9a2a-4677-b279-5a6d39904679] Running
	I0610 11:53:35.408208   56769 system_pods.go:61] "kube-apiserver-embed-certs-832735" [7e11e03e-7b15-4e9b-8f9a-9a46d7aadd7e] Running
	I0610 11:53:35.408211   56769 system_pods.go:61] "kube-controller-manager-embed-certs-832735" [75aa996d-fdf3-4c32-b25d-03c7582b3502] Running
	I0610 11:53:35.408215   56769 system_pods.go:61] "kube-proxy-b7x2p" [fe1cd055-691f-46b1-ada7-7dded31d2308] Running
	I0610 11:53:35.408218   56769 system_pods.go:61] "kube-scheduler-embed-certs-832735" [b7a7fcfb-7ce9-4470-9052-79bc13029408] Running
	I0610 11:53:35.408223   56769 system_pods.go:61] "metrics-server-569cc877fc-5zg8j" [e979b4b0-356d-479d-990f-d9e6e46a1a9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 11:53:35.408233   56769 system_pods.go:61] "storage-provisioner" [47aa143e-3545-492d-ac93-e62f0076e0f4] Running
	I0610 11:53:35.408241   56769 system_pods.go:74] duration metric: took 3.805596332s to wait for pod list to return data ...
	I0610 11:53:35.408248   56769 default_sa.go:34] waiting for default service account to be created ...
	I0610 11:53:35.410634   56769 default_sa.go:45] found service account: "default"
	I0610 11:53:35.410659   56769 default_sa.go:55] duration metric: took 2.405735ms for default service account to be created ...
	I0610 11:53:35.410667   56769 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 11:53:35.415849   56769 system_pods.go:86] 8 kube-system pods found
	I0610 11:53:35.415871   56769 system_pods.go:89] "coredns-7db6d8ff4d-7dlzb" [4b2618cd-b48c-44bd-a07d-4fe4585a14fa] Running
	I0610 11:53:35.415876   56769 system_pods.go:89] "etcd-embed-certs-832735" [4b7d413d-9a2a-4677-b279-5a6d39904679] Running
	I0610 11:53:35.415881   56769 system_pods.go:89] "kube-apiserver-embed-certs-832735" [7e11e03e-7b15-4e9b-8f9a-9a46d7aadd7e] Running
	I0610 11:53:35.415886   56769 system_pods.go:89] "kube-controller-manager-embed-certs-832735" [75aa996d-fdf3-4c32-b25d-03c7582b3502] Running
	I0610 11:53:35.415890   56769 system_pods.go:89] "kube-proxy-b7x2p" [fe1cd055-691f-46b1-ada7-7dded31d2308] Running
	I0610 11:53:35.415894   56769 system_pods.go:89] "kube-scheduler-embed-certs-832735" [b7a7fcfb-7ce9-4470-9052-79bc13029408] Running
	I0610 11:53:35.415900   56769 system_pods.go:89] "metrics-server-569cc877fc-5zg8j" [e979b4b0-356d-479d-990f-d9e6e46a1a9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 11:53:35.415906   56769 system_pods.go:89] "storage-provisioner" [47aa143e-3545-492d-ac93-e62f0076e0f4] Running
	I0610 11:53:35.415913   56769 system_pods.go:126] duration metric: took 5.241641ms to wait for k8s-apps to be running ...
	I0610 11:53:35.415919   56769 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 11:53:35.415957   56769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:53:35.431179   56769 system_svc.go:56] duration metric: took 15.252123ms WaitForService to wait for kubelet
	I0610 11:53:35.431209   56769 kubeadm.go:576] duration metric: took 4m21.85536785s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 11:53:35.431233   56769 node_conditions.go:102] verifying NodePressure condition ...
	I0610 11:53:35.433918   56769 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 11:53:35.433941   56769 node_conditions.go:123] node cpu capacity is 2
	I0610 11:53:35.433955   56769 node_conditions.go:105] duration metric: took 2.718538ms to run NodePressure ...
	I0610 11:53:35.433966   56769 start.go:240] waiting for startup goroutines ...
	I0610 11:53:35.433973   56769 start.go:245] waiting for cluster config update ...
	I0610 11:53:35.433982   56769 start.go:254] writing updated cluster config ...
	I0610 11:53:35.434234   56769 ssh_runner.go:195] Run: rm -f paused
	I0610 11:53:35.483552   56769 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 11:53:35.485459   56769 out.go:177] * Done! kubectl is now configured to use "embed-certs-832735" cluster and "default" namespace by default
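
The readiness gates logged above for "embed-certs-832735" (kube-system pod list, default service account, kubelet service, node capacity) can be spot-checked by hand. A minimal sketch, reusing the kubeconfig and kubectl paths that appear in this log; these commands are illustrative and are not part of the captured output:

	sudo systemctl is-active --quiet kubelet && echo "kubelet is active"
	sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get pods -n kube-system
	sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default
	sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig describe nodes | grep -E 'cpu:|ephemeral-storage:'
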
	I0610 11:53:34.892890   57945 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0610 11:53:34.893019   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:53:34.893195   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:53:32.987749   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:33.488008   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:33.988419   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:34.488002   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:34.988349   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:35.487347   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:35.987479   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:36.487972   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:36.987442   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:37.488069   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:34.337236   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:39.893441   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:53:39.893640   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:53:37.987751   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:38.488215   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:38.987955   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:39.487394   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:39.987431   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:40.488304   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:40.987779   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:41.488123   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:41.987438   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:42.487799   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:42.987548   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:43.084050   57572 kubeadm.go:1107] duration metric: took 12.761214532s to wait for elevateKubeSystemPrivileges
	W0610 11:53:43.084095   57572 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 11:53:43.084109   57572 kubeadm.go:393] duration metric: took 5m9.100565129s to StartCluster
	I0610 11:53:43.084128   57572 settings.go:142] acquiring lock: {Name:mk00410f6b6051b7558c7a348cc8c9f1c35c7547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:53:43.084215   57572 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 11:53:43.085889   57572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/kubeconfig: {Name:mk6bc087e599296d9e4a696a021944fac20ee98b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:53:43.086151   57572 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 11:53:43.087762   57572 out.go:177] * Verifying Kubernetes components...
	I0610 11:53:43.086215   57572 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 11:53:43.087796   57572 addons.go:69] Setting storage-provisioner=true in profile "no-preload-298179"
	I0610 11:53:43.087800   57572 addons.go:69] Setting default-storageclass=true in profile "no-preload-298179"
	I0610 11:53:43.087819   57572 addons.go:234] Setting addon storage-provisioner=true in "no-preload-298179"
	W0610 11:53:43.087825   57572 addons.go:243] addon storage-provisioner should already be in state true
	I0610 11:53:43.087832   57572 addons.go:69] Setting metrics-server=true in profile "no-preload-298179"
	I0610 11:53:43.087847   57572 addons.go:234] Setting addon metrics-server=true in "no-preload-298179"
	W0610 11:53:43.087856   57572 addons.go:243] addon metrics-server should already be in state true
	I0610 11:53:43.087826   57572 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-298179"
	I0610 11:53:43.087878   57572 host.go:66] Checking if "no-preload-298179" exists ...
	I0610 11:53:43.089535   57572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:53:43.087856   57572 host.go:66] Checking if "no-preload-298179" exists ...
	I0610 11:53:43.086356   57572 config.go:182] Loaded profile config "no-preload-298179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:53:43.088180   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.088182   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.089687   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.089713   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.089869   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.089895   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.104587   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38283
	I0610 11:53:43.104609   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44535
	I0610 11:53:43.104586   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34031
	I0610 11:53:43.105501   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.105566   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.105508   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.105983   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.105997   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.106134   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.106153   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.106160   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.106184   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.106350   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.106526   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.106568   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.106692   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetState
	I0610 11:53:43.106890   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.106918   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.107118   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.107141   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.109645   57572 addons.go:234] Setting addon default-storageclass=true in "no-preload-298179"
	W0610 11:53:43.109664   57572 addons.go:243] addon default-storageclass should already be in state true
	I0610 11:53:43.109692   57572 host.go:66] Checking if "no-preload-298179" exists ...
	I0610 11:53:43.109914   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.109939   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.123209   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45401
	I0610 11:53:43.123703   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.124011   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38295
	I0610 11:53:43.124351   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.124372   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.124393   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.124777   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.124847   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.124869   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.124998   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetState
	I0610 11:53:43.125208   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.125941   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.125994   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.126208   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35175
	I0610 11:53:43.126555   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.126915   57572 main.go:141] libmachine: (no-preload-298179) Calling .DriverName
	I0610 11:53:43.127030   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.127038   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.129007   57572 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0610 11:53:43.127369   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.130329   57572 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0610 11:53:43.130349   57572 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0610 11:53:43.130372   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHHostname
	I0610 11:53:43.130501   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetState
	I0610 11:53:43.132699   57572 main.go:141] libmachine: (no-preload-298179) Calling .DriverName
	I0610 11:53:43.134359   57572 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 11:53:40.417218   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:43.489341   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:43.135801   57572 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 11:53:43.135818   57572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 11:53:43.135837   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHHostname
	I0610 11:53:43.134045   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.135918   57572 main.go:141] libmachine: (no-preload-298179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:72:68", ip: ""} in network mk-no-preload-298179: {Iface:virbr2 ExpiryTime:2024-06-10 12:48:08 +0000 UTC Type:0 Mac:52:54:00:92:72:68 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:no-preload-298179 Clientid:01:52:54:00:92:72:68}
	I0610 11:53:43.135948   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined IP address 192.168.39.48 and MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.134772   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHPort
	I0610 11:53:43.136159   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHKeyPath
	I0610 11:53:43.136318   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHUsername
	I0610 11:53:43.136621   57572 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/no-preload-298179/id_rsa Username:docker}
	I0610 11:53:43.139217   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.139636   57572 main.go:141] libmachine: (no-preload-298179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:72:68", ip: ""} in network mk-no-preload-298179: {Iface:virbr2 ExpiryTime:2024-06-10 12:48:08 +0000 UTC Type:0 Mac:52:54:00:92:72:68 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:no-preload-298179 Clientid:01:52:54:00:92:72:68}
	I0610 11:53:43.139658   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined IP address 192.168.39.48 and MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.140091   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHPort
	I0610 11:53:43.140568   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHKeyPath
	I0610 11:53:43.140865   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHUsername
	I0610 11:53:43.141293   57572 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/no-preload-298179/id_rsa Username:docker}
	I0610 11:53:43.145179   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40713
	I0610 11:53:43.145813   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.146336   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.146358   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.146675   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.146987   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetState
	I0610 11:53:43.148747   57572 main.go:141] libmachine: (no-preload-298179) Calling .DriverName
	I0610 11:53:43.149026   57572 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 11:53:43.149042   57572 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 11:53:43.149064   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHHostname
	I0610 11:53:43.152048   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.152550   57572 main.go:141] libmachine: (no-preload-298179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:72:68", ip: ""} in network mk-no-preload-298179: {Iface:virbr2 ExpiryTime:2024-06-10 12:48:08 +0000 UTC Type:0 Mac:52:54:00:92:72:68 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:no-preload-298179 Clientid:01:52:54:00:92:72:68}
	I0610 11:53:43.152572   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined IP address 192.168.39.48 and MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.152780   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHPort
	I0610 11:53:43.153021   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHKeyPath
	I0610 11:53:43.153256   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHUsername
	I0610 11:53:43.153406   57572 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/no-preload-298179/id_rsa Username:docker}
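
The sshutil clients above all target 192.168.39.48:22 with the per-machine key and the "docker" user. An equivalent manual session, for illustration only (same key path and address as in the log):

	ssh -i /home/jenkins/minikube-integration/19046-3880/.minikube/machines/no-preload-298179/id_rsa docker@192.168.39.48
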
	I0610 11:53:43.293079   57572 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 11:53:43.323699   57572 node_ready.go:35] waiting up to 6m0s for node "no-preload-298179" to be "Ready" ...
	I0610 11:53:43.331922   57572 node_ready.go:49] node "no-preload-298179" has status "Ready":"True"
	I0610 11:53:43.331946   57572 node_ready.go:38] duration metric: took 8.212434ms for node "no-preload-298179" to be "Ready" ...
	I0610 11:53:43.331956   57572 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 11:53:43.338721   57572 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9mqrm" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:43.399175   57572 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0610 11:53:43.399196   57572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0610 11:53:43.432920   57572 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0610 11:53:43.432986   57572 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0610 11:53:43.453982   57572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 11:53:43.457146   57572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 11:53:43.500871   57572 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0610 11:53:43.500900   57572 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0610 11:53:43.601303   57572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0610 11:53:44.376916   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.376992   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.377083   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.377105   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.377298   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.377377   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.377383   57572 main.go:141] libmachine: (no-preload-298179) DBG | Closing plugin on server side
	I0610 11:53:44.377301   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.377394   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.377403   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.377405   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.377414   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.377421   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.377608   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.377634   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.379039   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.379090   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.379054   57572 main.go:141] libmachine: (no-preload-298179) DBG | Closing plugin on server side
	I0610 11:53:44.397328   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.397354   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.397626   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.397644   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.880094   57572 pod_ready.go:92] pod "coredns-7db6d8ff4d-9mqrm" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:44.880129   57572 pod_ready.go:81] duration metric: took 1.541384627s for pod "coredns-7db6d8ff4d-9mqrm" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.880149   57572 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f622z" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.901625   57572 pod_ready.go:92] pod "coredns-7db6d8ff4d-f622z" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:44.901649   57572 pod_ready.go:81] duration metric: took 21.492207ms for pod "coredns-7db6d8ff4d-f622z" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.901658   57572 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.907530   57572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.306184796s)
	I0610 11:53:44.907587   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.907603   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.907929   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.907991   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.908005   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.908015   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.908262   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.908301   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.908305   57572 main.go:141] libmachine: (no-preload-298179) DBG | Closing plugin on server side
	I0610 11:53:44.908315   57572 addons.go:475] Verifying addon metrics-server=true in "no-preload-298179"
	I0610 11:53:44.910622   57572 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0610 11:53:44.911848   57572 addons.go:510] duration metric: took 1.825630817s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
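
The addon flow above copies the metrics-server, storage-provisioner and storageclass manifests into /etc/kubernetes/addons/ and applies them with the bundled kubectl. A quick way to confirm the result, sketched with the same kubeconfig path; these verification commands are assumptions for illustration and do not appear in the log:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl -n kube-system get deployment metrics-server
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl get storageclass
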
	I0610 11:53:44.922534   57572 pod_ready.go:92] pod "etcd-no-preload-298179" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:44.922562   57572 pod_ready.go:81] duration metric: took 20.896794ms for pod "etcd-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.922576   57572 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.947545   57572 pod_ready.go:92] pod "kube-apiserver-no-preload-298179" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:44.947569   57572 pod_ready.go:81] duration metric: took 24.984822ms for pod "kube-apiserver-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.947578   57572 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.956216   57572 pod_ready.go:92] pod "kube-controller-manager-no-preload-298179" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:44.956240   57572 pod_ready.go:81] duration metric: took 8.656291ms for pod "kube-controller-manager-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.956256   57572 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fhndh" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:45.326936   57572 pod_ready.go:92] pod "kube-proxy-fhndh" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:45.326977   57572 pod_ready.go:81] duration metric: took 370.713967ms for pod "kube-proxy-fhndh" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:45.326987   57572 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:45.733487   57572 pod_ready.go:92] pod "kube-scheduler-no-preload-298179" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:45.733514   57572 pod_ready.go:81] duration metric: took 406.51925ms for pod "kube-scheduler-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:45.733525   57572 pod_ready.go:38] duration metric: took 2.401559014s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 11:53:45.733544   57572 api_server.go:52] waiting for apiserver process to appear ...
	I0610 11:53:45.733612   57572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:53:45.754814   57572 api_server.go:72] duration metric: took 2.668628419s to wait for apiserver process to appear ...
	I0610 11:53:45.754838   57572 api_server.go:88] waiting for apiserver healthz status ...
	I0610 11:53:45.754867   57572 api_server.go:253] Checking apiserver healthz at https://192.168.39.48:8443/healthz ...
	I0610 11:53:45.763742   57572 api_server.go:279] https://192.168.39.48:8443/healthz returned 200:
	ok
	I0610 11:53:45.765314   57572 api_server.go:141] control plane version: v1.30.1
	I0610 11:53:45.765345   57572 api_server.go:131] duration metric: took 10.498726ms to wait for apiserver health ...
	I0610 11:53:45.765356   57572 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 11:53:45.930764   57572 system_pods.go:59] 9 kube-system pods found
	I0610 11:53:45.930792   57572 system_pods.go:61] "coredns-7db6d8ff4d-9mqrm" [6269d670-dffa-4526-8117-0b44df04554a] Running
	I0610 11:53:45.930796   57572 system_pods.go:61] "coredns-7db6d8ff4d-f622z" [16cb4de3-afa9-4e45-bc85-e51273973808] Running
	I0610 11:53:45.930800   57572 system_pods.go:61] "etcd-no-preload-298179" [088f1950-04c4-49e0-b3e2-fe8b5f398a08] Running
	I0610 11:53:45.930806   57572 system_pods.go:61] "kube-apiserver-no-preload-298179" [11bad142-25ff-4aa9-9d9e-4b7cbb053bdd] Running
	I0610 11:53:45.930810   57572 system_pods.go:61] "kube-controller-manager-no-preload-298179" [ac29a4d9-6e9c-44fd-bb39-477255b94d0c] Running
	I0610 11:53:45.930814   57572 system_pods.go:61] "kube-proxy-fhndh" [50f848e7-44f6-4ab1-bf94-3189733abca2] Running
	I0610 11:53:45.930818   57572 system_pods.go:61] "kube-scheduler-no-preload-298179" [8569c375-b9bd-4a46-91ea-c6372056e45d] Running
	I0610 11:53:45.930826   57572 system_pods.go:61] "metrics-server-569cc877fc-jp7dr" [21136ae9-40d8-4857-aca5-47e3fa3b7e9c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 11:53:45.930831   57572 system_pods.go:61] "storage-provisioner" [783f523c-4c21-4ae0-bc18-9c391e7342b0] Running
	I0610 11:53:45.930843   57572 system_pods.go:74] duration metric: took 165.479385ms to wait for pod list to return data ...
	I0610 11:53:45.930855   57572 default_sa.go:34] waiting for default service account to be created ...
	I0610 11:53:46.127109   57572 default_sa.go:45] found service account: "default"
	I0610 11:53:46.127145   57572 default_sa.go:55] duration metric: took 196.279685ms for default service account to be created ...
	I0610 11:53:46.127154   57572 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 11:53:46.330560   57572 system_pods.go:86] 9 kube-system pods found
	I0610 11:53:46.330587   57572 system_pods.go:89] "coredns-7db6d8ff4d-9mqrm" [6269d670-dffa-4526-8117-0b44df04554a] Running
	I0610 11:53:46.330592   57572 system_pods.go:89] "coredns-7db6d8ff4d-f622z" [16cb4de3-afa9-4e45-bc85-e51273973808] Running
	I0610 11:53:46.330597   57572 system_pods.go:89] "etcd-no-preload-298179" [088f1950-04c4-49e0-b3e2-fe8b5f398a08] Running
	I0610 11:53:46.330601   57572 system_pods.go:89] "kube-apiserver-no-preload-298179" [11bad142-25ff-4aa9-9d9e-4b7cbb053bdd] Running
	I0610 11:53:46.330605   57572 system_pods.go:89] "kube-controller-manager-no-preload-298179" [ac29a4d9-6e9c-44fd-bb39-477255b94d0c] Running
	I0610 11:53:46.330608   57572 system_pods.go:89] "kube-proxy-fhndh" [50f848e7-44f6-4ab1-bf94-3189733abca2] Running
	I0610 11:53:46.330612   57572 system_pods.go:89] "kube-scheduler-no-preload-298179" [8569c375-b9bd-4a46-91ea-c6372056e45d] Running
	I0610 11:53:46.330619   57572 system_pods.go:89] "metrics-server-569cc877fc-jp7dr" [21136ae9-40d8-4857-aca5-47e3fa3b7e9c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 11:53:46.330623   57572 system_pods.go:89] "storage-provisioner" [783f523c-4c21-4ae0-bc18-9c391e7342b0] Running
	I0610 11:53:46.330631   57572 system_pods.go:126] duration metric: took 203.472984ms to wait for k8s-apps to be running ...
	I0610 11:53:46.330640   57572 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 11:53:46.330681   57572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:53:46.345084   57572 system_svc.go:56] duration metric: took 14.432966ms WaitForService to wait for kubelet
	I0610 11:53:46.345113   57572 kubeadm.go:576] duration metric: took 3.258932349s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 11:53:46.345131   57572 node_conditions.go:102] verifying NodePressure condition ...
	I0610 11:53:46.528236   57572 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 11:53:46.528269   57572 node_conditions.go:123] node cpu capacity is 2
	I0610 11:53:46.528278   57572 node_conditions.go:105] duration metric: took 183.142711ms to run NodePressure ...
	I0610 11:53:46.528288   57572 start.go:240] waiting for startup goroutines ...
	I0610 11:53:46.528294   57572 start.go:245] waiting for cluster config update ...
	I0610 11:53:46.528303   57572 start.go:254] writing updated cluster config ...
	I0610 11:53:46.528561   57572 ssh_runner.go:195] Run: rm -f paused
	I0610 11:53:46.576348   57572 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 11:53:46.578434   57572 out.go:177] * Done! kubectl is now configured to use "no-preload-298179" cluster and "default" namespace by default
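
The apiserver health probe and per-pod "Ready" waits performed for "no-preload-298179" above reduce to a couple of plain commands. A sketch assuming the endpoint shown in the log and a kubeconfig context named after the profile (an assumption; minikube normally creates one):

	# healthz probe the log reports as "returned 200: ok"; -k because the CA is cluster-local
	curl -sk https://192.168.39.48:8443/healthz
	# equivalent of the per-pod Ready waits
	kubectl --context no-preload-298179 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
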
	I0610 11:53:49.894176   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:53:49.894368   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:53:49.573292   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:52.641233   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:58.721260   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:01.793270   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:07.873263   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:09.895012   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:54:09.895413   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:54:10.945237   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:17.025183   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:20.097196   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:26.177217   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:29.249267   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:35.329193   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:38.401234   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:44.481254   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:47.553200   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:49.896623   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:54:49.896849   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:54:49.896868   57945 kubeadm.go:309] 
	I0610 11:54:49.896922   57945 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0610 11:54:49.897030   57945 kubeadm.go:309] 		timed out waiting for the condition
	I0610 11:54:49.897053   57945 kubeadm.go:309] 
	I0610 11:54:49.897121   57945 kubeadm.go:309] 	This error is likely caused by:
	I0610 11:54:49.897157   57945 kubeadm.go:309] 		- The kubelet is not running
	I0610 11:54:49.897308   57945 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0610 11:54:49.897322   57945 kubeadm.go:309] 
	I0610 11:54:49.897493   57945 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0610 11:54:49.897553   57945 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0610 11:54:49.897612   57945 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0610 11:54:49.897623   57945 kubeadm.go:309] 
	I0610 11:54:49.897755   57945 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0610 11:54:49.897866   57945 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0610 11:54:49.897876   57945 kubeadm.go:309] 
	I0610 11:54:49.898032   57945 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0610 11:54:49.898139   57945 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0610 11:54:49.898253   57945 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0610 11:54:49.898357   57945 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0610 11:54:49.898365   57945 kubeadm.go:309] 
	I0610 11:54:49.899094   57945 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 11:54:49.899208   57945 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0610 11:54:49.899302   57945 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0610 11:54:49.899441   57945 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0610 11:54:49.899502   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
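
After the failed init, the log shows kubeadm reset being run before a retry. The troubleshooting hints printed in the error above can be executed directly on the node; collected here as one sequence (the commands are the ones kubeadm itself suggests, plus the kubelet health probe it uses; nothing here is new output):

	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	curl -sSL http://localhost:10248/healthz
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then, for a failing container ID taken from the listing above:
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
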
	I0610 11:54:50.366528   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:54:50.380107   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:54:50.390067   57945 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:54:50.390089   57945 kubeadm.go:156] found existing configuration files:
	
	I0610 11:54:50.390132   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 11:54:50.399159   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:54:50.399222   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:54:50.409346   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 11:54:50.420402   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:54:50.420458   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:54:50.432874   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 11:54:50.444351   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:54:50.444430   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:54:50.458175   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 11:54:50.468538   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:54:50.468611   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
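
The lines above show the stale-config check: each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf is removed when it does not reference https://control-plane.minikube.internal:8443. A condensed, hand-runnable form of the same logic (the loop is an illustration, not how minikube invokes it):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
	    || sudo rm -f /etc/kubernetes/$f.conf
	done
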
	I0610 11:54:50.480033   57945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 11:54:50.543600   57945 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0610 11:54:50.543653   57945 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 11:54:50.682810   57945 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 11:54:50.682970   57945 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 11:54:50.683117   57945 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 11:54:50.877761   57945 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 11:54:50.879686   57945 out.go:204]   - Generating certificates and keys ...
	I0610 11:54:50.879788   57945 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 11:54:50.879881   57945 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 11:54:50.880010   57945 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 11:54:50.880075   57945 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 11:54:50.880145   57945 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 11:54:50.880235   57945 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 11:54:50.880334   57945 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 11:54:50.880543   57945 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 11:54:50.880654   57945 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 11:54:50.880771   57945 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 11:54:50.880835   57945 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 11:54:50.880912   57945 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 11:54:51.326073   57945 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 11:54:51.537409   57945 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 11:54:51.721400   57945 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 11:54:51.884882   57945 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 11:54:51.904377   57945 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 11:54:51.906470   57945 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 11:54:51.906560   57945 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 11:54:52.065800   57945 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 11:54:52.067657   57945 out.go:204]   - Booting up control plane ...
	I0610 11:54:52.067807   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 11:54:52.069012   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 11:54:52.070508   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 11:54:52.071669   57945 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 11:54:52.074772   57945 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 11:54:53.633176   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:56.705245   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:02.785227   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:05.857320   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:11.941172   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:15.009275   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:21.089235   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:24.161264   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:32.077145   57945 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0610 11:55:32.077542   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:55:32.077740   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:55:30.241187   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:33.313200   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:37.078114   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:55:37.078357   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:55:39.393317   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:42.465223   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:47.078706   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:55:47.078906   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:55:48.545281   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:51.617229   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:57.697600   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:00.769294   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:07.079053   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:56:07.079285   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:56:06.849261   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:09.925249   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:16.001299   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:19.077309   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:25.153200   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:28.225172   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:31.226848   60146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 11:56:31.226888   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetMachineName
	I0610 11:56:31.227225   60146 buildroot.go:166] provisioning hostname "default-k8s-diff-port-281114"
	I0610 11:56:31.227250   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetMachineName
	I0610 11:56:31.227458   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:31.229187   60146 machine.go:97] duration metric: took 4m37.416418256s to provisionDockerMachine
	I0610 11:56:31.229224   60146 fix.go:56] duration metric: took 4m37.441343871s for fixHost
	I0610 11:56:31.229230   60146 start.go:83] releasing machines lock for "default-k8s-diff-port-281114", held for 4m37.44136358s
	W0610 11:56:31.229245   60146 start.go:713] error starting host: provision: host is not running
	W0610 11:56:31.229314   60146 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0610 11:56:31.229325   60146 start.go:728] Will try again in 5 seconds ...
	I0610 11:56:36.230954   60146 start.go:360] acquireMachinesLock for default-k8s-diff-port-281114: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 11:56:36.231068   60146 start.go:364] duration metric: took 60.465µs to acquireMachinesLock for "default-k8s-diff-port-281114"
	I0610 11:56:36.231091   60146 start.go:96] Skipping create...Using existing machine configuration
	I0610 11:56:36.231096   60146 fix.go:54] fixHost starting: 
	I0610 11:56:36.231372   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:56:36.231392   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:56:36.247286   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38849
	I0610 11:56:36.247715   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:56:36.248272   60146 main.go:141] libmachine: Using API Version  1
	I0610 11:56:36.248292   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:56:36.248585   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:56:36.248787   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:36.248939   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 11:56:36.250776   60146 fix.go:112] recreateIfNeeded on default-k8s-diff-port-281114: state=Stopped err=<nil>
	I0610 11:56:36.250796   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	W0610 11:56:36.250950   60146 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 11:56:36.252942   60146 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-281114" ...
	I0610 11:56:36.254300   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Start
	I0610 11:56:36.254515   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Ensuring networks are active...
	I0610 11:56:36.255281   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Ensuring network default is active
	I0610 11:56:36.255626   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Ensuring network mk-default-k8s-diff-port-281114 is active
	I0610 11:56:36.256059   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Getting domain xml...
	I0610 11:56:36.256819   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Creating domain...
	I0610 11:56:37.521102   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting to get IP...
	I0610 11:56:37.522061   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:37.522494   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:37.522553   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:37.522473   61276 retry.go:31] will retry after 220.098219ms: waiting for machine to come up
	I0610 11:56:37.743932   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:37.744482   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:37.744513   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:37.744440   61276 retry.go:31] will retry after 292.471184ms: waiting for machine to come up
	I0610 11:56:38.038937   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:38.039497   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:38.039526   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:38.039454   61276 retry.go:31] will retry after 446.869846ms: waiting for machine to come up
	I0610 11:56:38.488091   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:38.488684   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:38.488708   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:38.488635   61276 retry.go:31] will retry after 607.787706ms: waiting for machine to come up
	I0610 11:56:39.098375   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:39.098845   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:39.098875   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:39.098795   61276 retry.go:31] will retry after 610.636143ms: waiting for machine to come up
	I0610 11:56:39.710692   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:39.711170   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:39.711198   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:39.711106   61276 retry.go:31] will retry after 598.132053ms: waiting for machine to come up
	I0610 11:56:40.310889   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:40.311397   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:40.311420   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:40.311328   61276 retry.go:31] will retry after 1.191704846s: waiting for machine to come up
	I0610 11:56:41.505131   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:41.505601   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:41.505631   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:41.505572   61276 retry.go:31] will retry after 937.081207ms: waiting for machine to come up
	I0610 11:56:42.444793   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:42.445368   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:42.445396   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:42.445338   61276 retry.go:31] will retry after 1.721662133s: waiting for machine to come up
	I0610 11:56:47.078993   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:56:47.079439   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:56:47.079463   57945 kubeadm.go:309] 
	I0610 11:56:47.079513   57945 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0610 11:56:47.079597   57945 kubeadm.go:309] 		timed out waiting for the condition
	I0610 11:56:47.079629   57945 kubeadm.go:309] 
	I0610 11:56:47.079678   57945 kubeadm.go:309] 	This error is likely caused by:
	I0610 11:56:47.079718   57945 kubeadm.go:309] 		- The kubelet is not running
	I0610 11:56:47.079865   57945 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0610 11:56:47.079876   57945 kubeadm.go:309] 
	I0610 11:56:47.080014   57945 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0610 11:56:47.080077   57945 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0610 11:56:47.080132   57945 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0610 11:56:47.080151   57945 kubeadm.go:309] 
	I0610 11:56:47.080280   57945 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0610 11:56:47.080377   57945 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0610 11:56:47.080389   57945 kubeadm.go:309] 
	I0610 11:56:47.080543   57945 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0610 11:56:47.080663   57945 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0610 11:56:47.080769   57945 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0610 11:56:47.080862   57945 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0610 11:56:47.080874   57945 kubeadm.go:309] 
	I0610 11:56:47.081877   57945 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 11:56:47.082023   57945 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0610 11:56:47.082137   57945 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0610 11:56:47.082233   57945 kubeadm.go:393] duration metric: took 8m2.423366884s to StartCluster
	I0610 11:56:47.082273   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:56:47.082325   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:56:47.130548   57945 cri.go:89] found id: ""
	I0610 11:56:47.130585   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.130596   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:56:47.130603   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:56:47.130673   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:56:47.170087   57945 cri.go:89] found id: ""
	I0610 11:56:47.170124   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.170136   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:56:47.170144   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:56:47.170219   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:56:47.210394   57945 cri.go:89] found id: ""
	I0610 11:56:47.210430   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.210442   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:56:47.210450   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:56:47.210532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:56:47.246002   57945 cri.go:89] found id: ""
	I0610 11:56:47.246032   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.246043   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:56:47.246051   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:56:47.246119   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:56:47.282333   57945 cri.go:89] found id: ""
	I0610 11:56:47.282361   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.282369   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:56:47.282375   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:56:47.282432   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:56:47.316205   57945 cri.go:89] found id: ""
	I0610 11:56:47.316241   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.316254   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:56:47.316262   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:56:47.316323   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:56:47.356012   57945 cri.go:89] found id: ""
	I0610 11:56:47.356047   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.356060   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:56:47.356069   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:56:47.356140   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:56:47.404624   57945 cri.go:89] found id: ""
	I0610 11:56:47.404655   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.404666   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:56:47.404678   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:56:47.404694   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:56:47.475236   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:56:47.475282   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:56:47.493382   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:56:47.493418   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:56:47.589894   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:56:47.589918   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:56:47.589934   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:56:47.726080   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:56:47.726123   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0610 11:56:47.770399   57945 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0610 11:56:47.770451   57945 out.go:239] * 
	W0610 11:56:47.770532   57945 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0610 11:56:47.770558   57945 out.go:239] * 
	W0610 11:56:47.771459   57945 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 11:56:47.775172   57945 out.go:177] 
	W0610 11:56:47.776444   57945 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0610 11:56:47.776509   57945 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0610 11:56:47.776545   57945 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0610 11:56:47.778306   57945 out.go:177] 
	I0610 11:56:44.168288   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:44.168809   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:44.168832   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:44.168762   61276 retry.go:31] will retry after 2.181806835s: waiting for machine to come up
	I0610 11:56:46.352210   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:46.352736   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:46.352764   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:46.352688   61276 retry.go:31] will retry after 2.388171324s: waiting for machine to come up
	I0610 11:56:48.744345   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:48.744853   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:48.744890   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:48.744815   61276 retry.go:31] will retry after 2.54250043s: waiting for machine to come up
	I0610 11:56:51.288816   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:51.289222   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:51.289252   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:51.289190   61276 retry.go:31] will retry after 4.525493142s: waiting for machine to come up
	I0610 11:56:55.819862   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.820393   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Found IP for machine: 192.168.50.222
	I0610 11:56:55.820416   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Reserving static IP address...
	I0610 11:56:55.820433   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has current primary IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.820941   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-281114", mac: "52:54:00:23:06:35", ip: "192.168.50.222"} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:55.820984   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Reserved static IP address: 192.168.50.222
	I0610 11:56:55.821000   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | skip adding static IP to network mk-default-k8s-diff-port-281114 - found existing host DHCP lease matching {name: "default-k8s-diff-port-281114", mac: "52:54:00:23:06:35", ip: "192.168.50.222"}
	I0610 11:56:55.821012   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Getting to WaitForSSH function...
	I0610 11:56:55.821028   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for SSH to be available...
	I0610 11:56:55.823149   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.823499   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:55.823530   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.823680   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Using SSH client type: external
	I0610 11:56:55.823717   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Using SSH private key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa (-rw-------)
	I0610 11:56:55.823750   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0610 11:56:55.823764   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | About to run SSH command:
	I0610 11:56:55.823778   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | exit 0
	I0610 11:56:55.949264   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | SSH cmd err, output: <nil>: 
	I0610 11:56:55.949623   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetConfigRaw
	I0610 11:56:55.950371   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetIP
	I0610 11:56:55.953239   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.953602   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:55.953746   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.953874   60146 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/config.json ...
	I0610 11:56:55.954172   60146 machine.go:94] provisionDockerMachine start ...
	I0610 11:56:55.954203   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:55.954415   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:55.956837   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.957344   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:55.957361   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.957521   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:55.957710   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:55.957887   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:55.958055   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:55.958211   60146 main.go:141] libmachine: Using SSH client type: native
	I0610 11:56:55.958425   60146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.222 22 <nil> <nil>}
	I0610 11:56:55.958445   60146 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 11:56:56.061295   60146 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 11:56:56.061331   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetMachineName
	I0610 11:56:56.061559   60146 buildroot.go:166] provisioning hostname "default-k8s-diff-port-281114"
	I0610 11:56:56.061588   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetMachineName
	I0610 11:56:56.061787   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.064578   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.064938   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.064975   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.065131   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:56.065383   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.065565   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.065681   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:56.065874   60146 main.go:141] libmachine: Using SSH client type: native
	I0610 11:56:56.066079   60146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.222 22 <nil> <nil>}
	I0610 11:56:56.066094   60146 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-281114 && echo "default-k8s-diff-port-281114" | sudo tee /etc/hostname
	I0610 11:56:56.183602   60146 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-281114
	
	I0610 11:56:56.183626   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.186613   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.186986   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.187016   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.187260   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:56.187472   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.187656   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.187812   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:56.187993   60146 main.go:141] libmachine: Using SSH client type: native
	I0610 11:56:56.188192   60146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.222 22 <nil> <nil>}
	I0610 11:56:56.188220   60146 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-281114' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-281114/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-281114' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 11:56:56.298027   60146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 11:56:56.298057   60146 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19046-3880/.minikube CaCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19046-3880/.minikube}
	I0610 11:56:56.298076   60146 buildroot.go:174] setting up certificates
	I0610 11:56:56.298083   60146 provision.go:84] configureAuth start
	I0610 11:56:56.298094   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetMachineName
	I0610 11:56:56.298385   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetIP
	I0610 11:56:56.301219   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.301584   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.301614   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.301816   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.304010   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.304412   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.304438   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.304593   60146 provision.go:143] copyHostCerts
	I0610 11:56:56.304668   60146 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem, removing ...
	I0610 11:56:56.304681   60146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 11:56:56.304765   60146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem (1082 bytes)
	I0610 11:56:56.304874   60146 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem, removing ...
	I0610 11:56:56.304884   60146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 11:56:56.304924   60146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem (1123 bytes)
	I0610 11:56:56.305040   60146 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem, removing ...
	I0610 11:56:56.305050   60146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 11:56:56.305084   60146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem (1675 bytes)
	I0610 11:56:56.305153   60146 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-281114 san=[127.0.0.1 192.168.50.222 default-k8s-diff-port-281114 localhost minikube]
	I0610 11:56:56.411016   60146 provision.go:177] copyRemoteCerts
	I0610 11:56:56.411072   60146 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 11:56:56.411093   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.413736   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.414075   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.414122   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.414292   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:56.414498   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.414686   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:56.414785   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 11:56:56.495039   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 11:56:56.519750   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 11:56:56.543202   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0610 11:56:56.566420   60146 provision.go:87] duration metric: took 268.326859ms to configureAuth
	I0610 11:56:56.566449   60146 buildroot.go:189] setting minikube options for container-runtime
	I0610 11:56:56.566653   60146 config.go:182] Loaded profile config "default-k8s-diff-port-281114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:56:56.566732   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.569742   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.570135   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.570159   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.570411   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:56.570635   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.570815   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.570969   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:56.571169   60146 main.go:141] libmachine: Using SSH client type: native
	I0610 11:56:56.571334   60146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.222 22 <nil> <nil>}
	I0610 11:56:56.571350   60146 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 11:56:56.846705   60146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 11:56:56.846727   60146 machine.go:97] duration metric: took 892.536744ms to provisionDockerMachine
	I0610 11:56:56.846741   60146 start.go:293] postStartSetup for "default-k8s-diff-port-281114" (driver="kvm2")
	I0610 11:56:56.846753   60146 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 11:56:56.846795   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:56.847123   60146 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 11:56:56.847150   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.849968   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.850300   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.850322   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.850518   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:56.850706   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.850889   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:56.851010   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 11:56:56.935027   60146 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 11:56:56.939465   60146 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 11:56:56.939489   60146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/addons for local assets ...
	I0610 11:56:56.939558   60146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/files for local assets ...
	I0610 11:56:56.939641   60146 filesync.go:149] local asset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> 107582.pem in /etc/ssl/certs
	I0610 11:56:56.939728   60146 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 11:56:56.948993   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /etc/ssl/certs/107582.pem (1708 bytes)
	I0610 11:56:56.974611   60146 start.go:296] duration metric: took 127.85527ms for postStartSetup
	I0610 11:56:56.974655   60146 fix.go:56] duration metric: took 20.74355824s for fixHost
	I0610 11:56:56.974673   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.978036   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.978438   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.978471   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.978612   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:56.978804   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.978984   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.979157   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:56.979343   60146 main.go:141] libmachine: Using SSH client type: native
	I0610 11:56:56.979506   60146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.222 22 <nil> <nil>}
	I0610 11:56:56.979524   60146 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 11:56:57.081416   60146 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718020617.058533839
	
	I0610 11:56:57.081444   60146 fix.go:216] guest clock: 1718020617.058533839
	I0610 11:56:57.081454   60146 fix.go:229] Guest: 2024-06-10 11:56:57.058533839 +0000 UTC Remote: 2024-06-10 11:56:56.974658577 +0000 UTC m=+303.333936196 (delta=83.875262ms)
	I0610 11:56:57.081476   60146 fix.go:200] guest clock delta is within tolerance: 83.875262ms
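
The fix.go lines above read the guest's clock over SSH with `date +%s.%N`, diff it against the host's wall clock, and only resynchronize if the delta exceeds a tolerance (the exact threshold is not shown in this log; here 83.875262ms was accepted). A minimal sketch of that comparison in Go, with the value taken from the log and an assumed 1s tolerance for illustration only:

    // Minimal sketch (not minikube's actual implementation) of the guest-clock
    // check: parse the guest's `date +%s.%N` output, diff against the host
    // clock, and accept if the delta is inside a chosen tolerance.
    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    func main() {
    	guestOut := "1718020617.058533839" // value returned by the guest in the log above
    	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
    	if err != nil {
    		panic(err)
    	}
    	// float64 keeps sub-microsecond precision at this magnitude, enough for a ms-level check
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	tolerance := time.Second // assumed tolerance, for illustration only
    	fmt.Printf("guest clock delta %v, within %v: %v\n", delta, tolerance, delta <= tolerance)
    }
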
	I0610 11:56:57.081482   60146 start.go:83] releasing machines lock for "default-k8s-diff-port-281114", held for 20.850403793s
	I0610 11:56:57.081499   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:57.081775   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetIP
	I0610 11:56:57.084904   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:57.085408   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:57.085442   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:57.085619   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:57.086222   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:57.086432   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:57.086519   60146 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 11:56:57.086571   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:57.086660   60146 ssh_runner.go:195] Run: cat /version.json
	I0610 11:56:57.086694   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:57.089544   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:57.089869   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:57.089904   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:57.089931   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:57.090091   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:57.090259   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:57.090362   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:57.090388   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:57.090444   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:57.090539   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:57.090613   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 11:56:57.090667   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:57.090806   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:57.090969   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 11:56:57.215361   60146 ssh_runner.go:195] Run: systemctl --version
	I0610 11:56:57.221479   60146 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 11:56:57.363318   60146 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 11:56:57.369389   60146 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 11:56:57.369465   60146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 11:56:57.385195   60146 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 11:56:57.385217   60146 start.go:494] detecting cgroup driver to use...
	I0610 11:56:57.385284   60146 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 11:56:57.404923   60146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 11:56:57.420158   60146 docker.go:217] disabling cri-docker service (if available) ...
	I0610 11:56:57.420204   60146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 11:56:57.434385   60146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 11:56:57.448340   60146 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 11:56:57.574978   60146 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 11:56:57.714523   60146 docker.go:233] disabling docker service ...
	I0610 11:56:57.714620   60146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 11:56:57.729914   60146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 11:56:57.742557   60146 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 11:56:57.885770   60146 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 11:56:58.018120   60146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 11:56:58.031606   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 11:56:58.049312   60146 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0610 11:56:58.049389   60146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:56:58.059800   60146 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 11:56:58.059877   60146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:56:58.071774   60146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:56:58.082332   60146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:56:58.093474   60146 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 11:56:58.104231   60146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:56:58.114328   60146 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:56:58.131812   60146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
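
The sed/grep commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: set pause_image, switch cgroup_manager to cgroupfs, re-add conmon_cgroup = "pod", and ensure default_sysctls allows unprivileged low ports. A sketch of the same rewrites applied to the config text in Go (illustrative only; minikube shells out to sed exactly as logged, and the sample input below is assumed):

    // Sketch of the in-place edits the sed commands above perform on the
    // CRI-O drop-in config. Not minikube's code; sample input is assumed.
    package main

    import (
    	"fmt"
    	"regexp"
    	"strings"
    )

    func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
    	// drop any existing conmon_cgroup line, then re-add it after cgroup_manager
    	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
    	conf = strings.Replace(conf,
    		fmt.Sprintf("cgroup_manager = %q", cgroupManager),
    		fmt.Sprintf("cgroup_manager = %q\nconmon_cgroup = \"pod\"", cgroupManager), 1)
    	// make sure pods may bind ports below 1024 without extra privileges
    	if !strings.Contains(conf, "default_sysctls") {
    		conf += "\ndefault_sysctls = [\n]\n"
    	}
    	conf = strings.Replace(conf, "default_sysctls = [",
    		"default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",", 1)
    	return conf
    }

    func main() {
    	sample := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
    	fmt.Println(rewriteCrioConf(sample, "registry.k8s.io/pause:3.9", "cgroupfs"))
    }
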
	I0610 11:56:58.142612   60146 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 11:56:58.152681   60146 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0610 11:56:58.152750   60146 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0610 11:56:58.166120   60146 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 11:56:58.176281   60146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:56:58.306558   60146 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0610 11:56:58.446379   60146 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 11:56:58.446460   60146 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 11:56:58.452523   60146 start.go:562] Will wait 60s for crictl version
	I0610 11:56:58.452619   60146 ssh_runner.go:195] Run: which crictl
	I0610 11:56:58.456611   60146 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 11:56:58.503496   60146 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0610 11:56:58.503581   60146 ssh_runner.go:195] Run: crio --version
	I0610 11:56:58.532834   60146 ssh_runner.go:195] Run: crio --version
	I0610 11:56:58.562697   60146 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0610 11:56:58.563974   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetIP
	I0610 11:56:58.566760   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:58.567107   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:58.567142   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:58.567408   60146 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0610 11:56:58.571671   60146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 11:56:58.584423   60146 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-281114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-281114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.222 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 11:56:58.584535   60146 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 11:56:58.584588   60146 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 11:56:58.622788   60146 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0610 11:56:58.622862   60146 ssh_runner.go:195] Run: which lz4
	I0610 11:56:58.627561   60146 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0610 11:56:58.632560   60146 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 11:56:58.632595   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0610 11:56:59.943375   60146 crio.go:462] duration metric: took 1.315853744s to copy over tarball
	I0610 11:56:59.943444   60146 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 11:57:02.167265   60146 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.223791523s)
	I0610 11:57:02.167299   60146 crio.go:469] duration metric: took 2.223894548s to extract the tarball
	I0610 11:57:02.167308   60146 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 11:57:02.206288   60146 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 11:57:02.250013   60146 crio.go:514] all images are preloaded for cri-o runtime.
	I0610 11:57:02.250034   60146 cache_images.go:84] Images are preloaded, skipping loading
	I0610 11:57:02.250041   60146 kubeadm.go:928] updating node { 192.168.50.222 8444 v1.30.1 crio true true} ...
	I0610 11:57:02.250163   60146 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-281114 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-281114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 11:57:02.250261   60146 ssh_runner.go:195] Run: crio config
	I0610 11:57:02.305797   60146 cni.go:84] Creating CNI manager for ""
	I0610 11:57:02.305822   60146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:57:02.305838   60146 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 11:57:02.305867   60146 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.222 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-281114 NodeName:default-k8s-diff-port-281114 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 11:57:02.306030   60146 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.222
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-281114"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.222
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
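
The generated kubeadm, kubelet, and kube-proxy configuration above wires the pod subnet (10.244.0.0/16) and service subnet (10.96.0.0/12) into the cluster alongside the node address 192.168.50.222. A standalone sketch of the kind of sanity check one could run over those values, using the standard library's net/netip (not part of minikube):

    // Standalone sketch: confirm the CIDRs from the kubeadm config above are
    // well-formed, do not overlap, and do not swallow the node address.
    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	podCIDR := netip.MustParsePrefix("10.244.0.0/16")    // networking.podSubnet
    	serviceCIDR := netip.MustParsePrefix("10.96.0.0/12") // networking.serviceSubnet
    	nodeIP := netip.MustParseAddr("192.168.50.222")      // localAPIEndpoint.advertiseAddress

    	fmt.Println("pod/service subnets overlap:", podCIDR.Overlaps(serviceCIDR))
    	fmt.Println("node IP inside pod subnet:", podCIDR.Contains(nodeIP))
    	fmt.Println("node IP inside service subnet:", serviceCIDR.Contains(nodeIP))
    }
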
	
	I0610 11:57:02.306111   60146 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 11:57:02.316522   60146 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 11:57:02.316585   60146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 11:57:02.326138   60146 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0610 11:57:02.342685   60146 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 11:57:02.359693   60146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0610 11:57:02.375771   60146 ssh_runner.go:195] Run: grep 192.168.50.222	control-plane.minikube.internal$ /etc/hosts
	I0610 11:57:02.379280   60146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
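
The bash one-liners at 11:56:58.571671 and 11:57:02.379280 rewrite /etc/hosts by dropping any stale host.minikube.internal / control-plane.minikube.internal entry and appending the fresh IP mapping. An illustrative Go equivalent of that upsert (minikube itself shells out to bash exactly as logged; the /tmp path below is only an example):

    // Illustrative equivalent of the logged /etc/hosts rewrite: drop any line
    // ending in "\t<host>" and append "<ip>\t<host>". Not minikube's code.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func upsertHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if line == "" || strings.HasSuffix(line, "\t"+host) {
    			continue // skip blanks and the stale entry
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := upsertHostsEntry("/tmp/hosts.example", "192.168.50.222", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
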
	I0610 11:57:02.390797   60146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:57:02.511286   60146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 11:57:02.529051   60146 certs.go:68] Setting up /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114 for IP: 192.168.50.222
	I0610 11:57:02.529076   60146 certs.go:194] generating shared ca certs ...
	I0610 11:57:02.529095   60146 certs.go:226] acquiring lock for ca certs: {Name:mke8b68fecbd8b649419d142cdde25446085a9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:57:02.529281   60146 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key
	I0610 11:57:02.529358   60146 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key
	I0610 11:57:02.529373   60146 certs.go:256] generating profile certs ...
	I0610 11:57:02.529492   60146 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/client.key
	I0610 11:57:02.529576   60146 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/apiserver.key.d35a2a33
	I0610 11:57:02.529626   60146 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/proxy-client.key
	I0610 11:57:02.529769   60146 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem (1338 bytes)
	W0610 11:57:02.529810   60146 certs.go:480] ignoring /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758_empty.pem, impossibly tiny 0 bytes
	I0610 11:57:02.529823   60146 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 11:57:02.529857   60146 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem (1082 bytes)
	I0610 11:57:02.529893   60146 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem (1123 bytes)
	I0610 11:57:02.529924   60146 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem (1675 bytes)
	I0610 11:57:02.529981   60146 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem (1708 bytes)
	I0610 11:57:02.531166   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 11:57:02.570183   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 11:57:02.607339   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 11:57:02.653464   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 11:57:02.694329   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0610 11:57:02.722420   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 11:57:02.747321   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 11:57:02.772755   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 11:57:02.797241   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 11:57:02.821892   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem --> /usr/share/ca-certificates/10758.pem (1338 bytes)
	I0610 11:57:02.846925   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /usr/share/ca-certificates/107582.pem (1708 bytes)
	I0610 11:57:02.870986   60146 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 11:57:02.889088   60146 ssh_runner.go:195] Run: openssl version
	I0610 11:57:02.894820   60146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10758.pem && ln -fs /usr/share/ca-certificates/10758.pem /etc/ssl/certs/10758.pem"
	I0610 11:57:02.906689   60146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10758.pem
	I0610 11:57:02.911048   60146 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:34 /usr/share/ca-certificates/10758.pem
	I0610 11:57:02.911095   60146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10758.pem
	I0610 11:57:02.916866   60146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10758.pem /etc/ssl/certs/51391683.0"
	I0610 11:57:02.928405   60146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107582.pem && ln -fs /usr/share/ca-certificates/107582.pem /etc/ssl/certs/107582.pem"
	I0610 11:57:02.941254   60146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107582.pem
	I0610 11:57:02.945849   60146 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:34 /usr/share/ca-certificates/107582.pem
	I0610 11:57:02.945899   60146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107582.pem
	I0610 11:57:02.951833   60146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107582.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 11:57:02.963661   60146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 11:57:02.975117   60146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:57:02.979667   60146 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:57:02.979731   60146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:57:02.985212   60146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 11:57:02.997007   60146 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 11:57:03.001498   60146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0610 11:57:03.007549   60146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0610 11:57:03.013717   60146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0610 11:57:03.019947   60146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0610 11:57:03.025890   60146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0610 11:57:03.031443   60146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
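
Each of the six openssl commands above runs `x509 -checkend 86400`, i.e. it fails if the certificate expires within the next 86400 seconds (24 hours). The same assertion expressed with Go's crypto/x509, as a sketch rather than minikube's implementation (minikube runs openssl over SSH as logged; the path in main is just one of the certs checked above):

    // Sketch of what `openssl x509 -checkend 86400` asserts for each cluster
    // certificate: report whether the cert expires within the next 24 hours.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("expires within 24h:", expiring)
    }
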
	I0610 11:57:03.036936   60146 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-281114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-281114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.222 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:57:03.037056   60146 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0610 11:57:03.037111   60146 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 11:57:03.088497   60146 cri.go:89] found id: ""
	I0610 11:57:03.088555   60146 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0610 11:57:03.099358   60146 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0610 11:57:03.099381   60146 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0610 11:57:03.099386   60146 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0610 11:57:03.099436   60146 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0610 11:57:03.109092   60146 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0610 11:57:03.110113   60146 kubeconfig.go:125] found "default-k8s-diff-port-281114" server: "https://192.168.50.222:8444"
	I0610 11:57:03.112565   60146 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0610 11:57:03.122338   60146 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.222
	I0610 11:57:03.122370   60146 kubeadm.go:1154] stopping kube-system containers ...
	I0610 11:57:03.122392   60146 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0610 11:57:03.122453   60146 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 11:57:03.159369   60146 cri.go:89] found id: ""
	I0610 11:57:03.159470   60146 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0610 11:57:03.176704   60146 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:57:03.186957   60146 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:57:03.186977   60146 kubeadm.go:156] found existing configuration files:
	
	I0610 11:57:03.187040   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0610 11:57:03.196318   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:57:03.196397   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:57:03.205630   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0610 11:57:03.214480   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:57:03.214538   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:57:03.223939   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0610 11:57:03.232372   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:57:03.232422   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:57:03.241846   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0610 11:57:03.251014   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:57:03.251092   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
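
The grep/rm pairs above inspect each of admin.conf, kubelet.conf, controller-manager.conf, and scheduler.conf for the expected server URL (https://control-plane.minikube.internal:8444) and delete any file that does not reference it, so the following `kubeadm init phase kubeconfig` regenerates it. A compact sketch of that cleanup loop (illustrative only; minikube runs grep and rm over SSH as logged):

    // Compact sketch of the stale-kubeconfig cleanup: remove any kubeconfig
    // that does not reference the expected control-plane endpoint.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8444"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// missing file or wrong endpoint: remove so kubeadm regenerates it
    			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
    				fmt.Fprintln(os.Stderr, rmErr)
    			}
    			continue
    		}
    		fmt.Println("keeping", f)
    	}
    }
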
	I0610 11:57:03.260115   60146 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 11:57:03.269792   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:57:03.388582   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:57:04.274314   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:57:04.473968   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:57:04.531884   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:57:04.618371   60146 api_server.go:52] waiting for apiserver process to appear ...
	I0610 11:57:04.618464   60146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:57:05.118733   60146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:57:05.619107   60146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:57:06.118937   60146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:57:06.138176   60146 api_server.go:72] duration metric: took 1.519803379s to wait for apiserver process to appear ...
	I0610 11:57:06.138205   60146 api_server.go:88] waiting for apiserver healthz status ...
	I0610 11:57:06.138223   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:09.201655   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0610 11:57:09.201680   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0610 11:57:09.201691   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:09.305898   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:09.305934   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:09.639319   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:09.644006   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:09.644041   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:10.138712   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:10.144989   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:10.145024   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:10.638505   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:10.642825   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:10.642861   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:11.138360   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:11.143062   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:11.143087   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:11.639058   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:11.643394   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:11.643419   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:12.139125   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:12.143425   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:12.143452   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:12.639074   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:12.644121   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 200:
	ok
	I0610 11:57:12.650538   60146 api_server.go:141] control plane version: v1.30.1
	I0610 11:57:12.650570   60146 api_server.go:131] duration metric: took 6.512357672s to wait for apiserver health ...
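The block above records api_server.go polling https://192.168.50.222:8444/healthz roughly every 500ms until the poststarthooks finish and the endpoint returns 200. For reference only, a standalone probe of the same endpoint could look like the following minimal Go sketch; the host, port, and interval are taken from the log above, while the TLS skip-verify setting and timeouts are illustrative assumptions, not minikube's actual implementation.

// healthz_probe.go — minimal sketch of polling an apiserver /healthz endpoint
// until it reports healthy (assumptions noted above).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// A standalone probe typically has to skip verification or supply the
		// cluster CA, since the apiserver cert is issued for cluster names.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.222:8444/healthz"
	deadline := time.Now().Add(2 * time.Minute)

	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz ok: %s\n", body)
				return
			}
			// A 500 body lists each failing poststarthook, as in the log above.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		} else {
			fmt.Printf("healthz request failed: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for a healthy apiserver")
}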
	I0610 11:57:12.650581   60146 cni.go:84] Creating CNI manager for ""
	I0610 11:57:12.650590   60146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:57:12.652548   60146 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 11:57:12.653918   60146 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 11:57:12.664536   60146 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0610 11:57:12.685230   60146 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 11:57:12.694511   60146 system_pods.go:59] 8 kube-system pods found
	I0610 11:57:12.694546   60146 system_pods.go:61] "coredns-7db6d8ff4d-5ngxc" [26f3438c-a6a2-43d5-b79d-991752b4cc10] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0610 11:57:12.694561   60146 system_pods.go:61] "etcd-default-k8s-diff-port-281114" [e8a3dc04-a9f0-4670-8256-7a0a617958ba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0610 11:57:12.694610   60146 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-281114" [45080cf7-94ee-4c55-a3b4-cfa8d3b4edbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0610 11:57:12.694626   60146 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-281114" [3f51cb0c-bb90-4847-acd4-0ed8a58608ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0610 11:57:12.694633   60146 system_pods.go:61] "kube-proxy-896ts" [13b994b7-8d0e-4e3d-9902-3bdd7a9ab949] Running
	I0610 11:57:12.694648   60146 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-281114" [c205a8b5-e970-40ed-83d7-462781bcf41f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0610 11:57:12.694659   60146 system_pods.go:61] "metrics-server-569cc877fc-jhv6f" [60a2e6ad-714a-4c6d-b586-232d130397a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 11:57:12.694665   60146 system_pods.go:61] "storage-provisioner" [b54a4493-2c6d-4a5e-b74c-ba9863979688] Running
	I0610 11:57:12.694675   60146 system_pods.go:74] duration metric: took 9.424371ms to wait for pod list to return data ...
	I0610 11:57:12.694687   60146 node_conditions.go:102] verifying NodePressure condition ...
	I0610 11:57:12.697547   60146 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 11:57:12.697571   60146 node_conditions.go:123] node cpu capacity is 2
	I0610 11:57:12.697583   60146 node_conditions.go:105] duration metric: took 2.887217ms to run NodePressure ...
	I0610 11:57:12.697633   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:57:12.966838   60146 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0610 11:57:12.971616   60146 kubeadm.go:733] kubelet initialised
	I0610 11:57:12.971641   60146 kubeadm.go:734] duration metric: took 4.781436ms waiting for restarted kubelet to initialise ...
	I0610 11:57:12.971649   60146 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 11:57:12.977162   60146 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-5ngxc" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:14.984174   60146 pod_ready.go:102] pod "coredns-7db6d8ff4d-5ngxc" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:16.984365   60146 pod_ready.go:102] pod "coredns-7db6d8ff4d-5ngxc" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:18.985423   60146 pod_ready.go:92] pod "coredns-7db6d8ff4d-5ngxc" in "kube-system" namespace has status "Ready":"True"
	I0610 11:57:18.985447   60146 pod_ready.go:81] duration metric: took 6.008259879s for pod "coredns-7db6d8ff4d-5ngxc" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:18.985459   60146 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:18.992228   60146 pod_ready.go:92] pod "etcd-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 11:57:18.992249   60146 pod_ready.go:81] duration metric: took 6.782049ms for pod "etcd-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:18.992261   60146 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:18.998328   60146 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 11:57:18.998354   60146 pod_ready.go:81] duration metric: took 6.080448ms for pod "kube-apiserver-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:18.998363   60146 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:21.004441   60146 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:23.005035   60146 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:23.505290   60146 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 11:57:23.505316   60146 pod_ready.go:81] duration metric: took 4.506946099s for pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:23.505326   60146 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-896ts" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:23.510714   60146 pod_ready.go:92] pod "kube-proxy-896ts" in "kube-system" namespace has status "Ready":"True"
	I0610 11:57:23.510733   60146 pod_ready.go:81] duration metric: took 5.402289ms for pod "kube-proxy-896ts" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:23.510741   60146 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:23.515120   60146 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 11:57:23.515138   60146 pod_ready.go:81] duration metric: took 4.391539ms for pod "kube-scheduler-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:23.515145   60146 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:25.522456   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:28.021723   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:30.521428   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:32.521868   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:35.020800   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:37.021406   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:39.022230   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:41.026828   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:43.521675   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:46.021385   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:48.521085   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:50.521489   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:53.020867   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:55.021644   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:57.521383   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:59.521662   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:02.021864   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:04.521572   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:07.021580   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:09.521128   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:11.522117   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:14.021270   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:16.022304   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:18.521534   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:21.021061   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:23.021721   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:25.521779   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:28.021005   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:30.023892   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:32.521068   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:35.022247   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:37.022812   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:39.521194   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:41.521813   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:43.521847   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:46.021646   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:48.521791   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:51.020662   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:53.020752   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:55.021736   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:57.521819   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:00.021201   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:02.521497   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:05.021115   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:07.521673   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:10.022328   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:12.521244   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:15.020407   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:17.021142   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:19.021398   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:21.021949   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:23.022714   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:25.521324   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:27.523011   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:30.021380   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:32.021456   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:34.021713   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:36.523229   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:39.023269   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:41.521241   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:43.522882   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:46.021368   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:48.021781   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:50.022979   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:52.522357   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:55.022181   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:57.521630   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:00.022732   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:02.524425   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:05.021218   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:07.021736   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:09.521121   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:12.022455   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:14.023274   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:16.521626   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:19.021624   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:21.021728   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:23.022457   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:25.023406   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:27.523393   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:30.022146   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:32.520816   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:34.522050   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:36.522345   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:39.021544   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:41.022726   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:43.520941   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:45.521181   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:47.522257   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:49.522829   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:51.523346   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:54.020982   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:56.021367   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:58.021467   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:00.021643   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:02.021791   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:04.021864   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:06.021968   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:08.521556   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:10.521588   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:12.521870   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:15.025925   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:17.523018   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:20.022903   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:22.521723   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:23.515523   60146 pod_ready.go:81] duration metric: took 4m0.000361045s for pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace to be "Ready" ...
	E0610 12:01:23.515558   60146 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0610 12:01:23.515582   60146 pod_ready.go:38] duration metric: took 4m10.543923644s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 12:01:23.515614   60146 kubeadm.go:591] duration metric: took 4m20.4162222s to restartPrimaryControlPlane
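The pod_ready.go lines above poll the Ready condition of each system-critical pod; metrics-server-569cc877fc-jhv6f never becomes Ready within the 4m0s budget, which is what triggers the cluster reset in the next lines. Purely as an illustrative sketch of that kind of wait (not minikube's own code), an equivalent check with client-go could be written as below; the namespace, pod name, and timeout are taken from the log, the kubeconfig path and polling interval are assumptions.

// pod_ready_wait.go — sketch of waiting for a pod's Ready condition with
// client-go (assumptions noted above).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-569cc877fc-jhv6f", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			return podIsReady(pod), nil
		})
	if err != nil {
		fmt.Println("timed out waiting for pod to be Ready:", err)
		return
	}
	fmt.Println("pod is Ready")
}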
	W0610 12:01:23.515715   60146 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0610 12:01:23.515751   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0610 12:01:54.687867   60146 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.172093979s)
	I0610 12:01:54.687931   60146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 12:01:54.704702   60146 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 12:01:54.714940   60146 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 12:01:54.724675   60146 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 12:01:54.724702   60146 kubeadm.go:156] found existing configuration files:
	
	I0610 12:01:54.724749   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0610 12:01:54.734652   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 12:01:54.734726   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 12:01:54.744642   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0610 12:01:54.755297   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 12:01:54.755375   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 12:01:54.765800   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0610 12:01:54.775568   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 12:01:54.775636   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 12:01:54.785076   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0610 12:01:54.793645   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 12:01:54.793706   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 12:01:54.803137   60146 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 12:01:54.855022   60146 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0610 12:01:54.855094   60146 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 12:01:54.995399   60146 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 12:01:54.995511   60146 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 12:01:54.995622   60146 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 12:01:55.194136   60146 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 12:01:55.196296   60146 out.go:204]   - Generating certificates and keys ...
	I0610 12:01:55.196396   60146 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 12:01:55.196475   60146 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 12:01:55.196575   60146 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 12:01:55.196680   60146 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 12:01:55.196792   60146 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 12:01:55.196874   60146 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 12:01:55.196984   60146 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 12:01:55.197077   60146 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 12:01:55.197158   60146 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 12:01:55.197231   60146 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 12:01:55.197265   60146 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 12:01:55.197320   60146 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 12:01:55.299197   60146 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 12:01:55.490367   60146 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 12:01:55.751377   60146 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 12:01:55.863144   60146 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 12:01:56.112395   60146 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 12:01:56.113059   60146 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 12:01:56.118410   60146 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 12:01:56.120277   60146 out.go:204]   - Booting up control plane ...
	I0610 12:01:56.120416   60146 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 12:01:56.120503   60146 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 12:01:56.120565   60146 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 12:01:56.138057   60146 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 12:01:56.138509   60146 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 12:01:56.138563   60146 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 12:01:56.263559   60146 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0610 12:01:56.263688   60146 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 12:01:57.264829   60146 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001316355s
	I0610 12:01:57.264927   60146 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0610 12:02:02.267632   60146 kubeadm.go:309] [api-check] The API server is healthy after 5.001644567s
	I0610 12:02:02.282693   60146 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 12:02:02.305741   60146 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 12:02:02.341283   60146 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 12:02:02.341527   60146 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-281114 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 12:02:02.355256   60146 kubeadm.go:309] [bootstrap-token] Using token: mkpvnr.wlx5xvctjlg8pi72
	I0610 12:02:02.356920   60146 out.go:204]   - Configuring RBAC rules ...
	I0610 12:02:02.357052   60146 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 12:02:02.367773   60146 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 12:02:02.376921   60146 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 12:02:02.386582   60146 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 12:02:02.390887   60146 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 12:02:02.399245   60146 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 12:02:02.674008   60146 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 12:02:03.137504   60146 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 12:02:03.673560   60146 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 12:02:03.674588   60146 kubeadm.go:309] 
	I0610 12:02:03.674677   60146 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 12:02:03.674694   60146 kubeadm.go:309] 
	I0610 12:02:03.674774   60146 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 12:02:03.674784   60146 kubeadm.go:309] 
	I0610 12:02:03.674813   60146 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 12:02:03.674924   60146 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 12:02:03.675014   60146 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 12:02:03.675026   60146 kubeadm.go:309] 
	I0610 12:02:03.675128   60146 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 12:02:03.675150   60146 kubeadm.go:309] 
	I0610 12:02:03.675225   60146 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 12:02:03.675234   60146 kubeadm.go:309] 
	I0610 12:02:03.675344   60146 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 12:02:03.675460   60146 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 12:02:03.675587   60146 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 12:02:03.677879   60146 kubeadm.go:309] 
	I0610 12:02:03.677961   60146 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 12:02:03.678057   60146 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 12:02:03.678068   60146 kubeadm.go:309] 
	I0610 12:02:03.678160   60146 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token mkpvnr.wlx5xvctjlg8pi72 \
	I0610 12:02:03.678304   60146 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e \
	I0610 12:02:03.678338   60146 kubeadm.go:309] 	--control-plane 
	I0610 12:02:03.678348   60146 kubeadm.go:309] 
	I0610 12:02:03.678446   60146 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 12:02:03.678460   60146 kubeadm.go:309] 
	I0610 12:02:03.678580   60146 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token mkpvnr.wlx5xvctjlg8pi72 \
	I0610 12:02:03.678726   60146 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e 
	I0610 12:02:03.678869   60146 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 12:02:03.678886   60146 cni.go:84] Creating CNI manager for ""
	I0610 12:02:03.678896   60146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 12:02:03.681019   60146 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 12:02:03.682415   60146 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 12:02:03.693028   60146 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0610 12:02:03.711436   60146 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 12:02:03.711534   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:03.711611   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-281114 minikube.k8s.io/updated_at=2024_06_10T12_02_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=default-k8s-diff-port-281114 minikube.k8s.io/primary=true
	I0610 12:02:03.888463   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:03.926946   60146 ops.go:34] apiserver oom_adj: -16
	I0610 12:02:04.389105   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:04.888545   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:05.389096   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:05.888853   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:06.389522   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:06.889491   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:07.389417   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:07.889485   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:08.388869   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:08.889480   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:09.389130   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:09.889052   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:10.389053   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:10.889177   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:11.388985   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:11.889405   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:12.388805   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:12.889139   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:13.389072   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:13.888843   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:14.389349   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:14.888798   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:15.388800   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:15.888491   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:16.389394   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:16.889175   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:17.007766   60146 kubeadm.go:1107] duration metric: took 13.296278569s to wait for elevateKubeSystemPrivileges
	W0610 12:02:17.007804   60146 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 12:02:17.007813   60146 kubeadm.go:393] duration metric: took 5m13.970894294s to StartCluster
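	The repeated "kubectl get sa default" calls above are minikube polling, at roughly 500ms intervals, until the default ServiceAccount exists (the "elevateKubeSystemPrivileges" step whose 13.3s duration is reported here). The standalone Go sketch below reproduces that idea only; it is not the kubeadm.go/ops.go code referenced in the log, and the kubeconfig path and two-minute timeout are illustrative assumptions.

// poll_default_sa.go: standalone sketch of the "wait for the default
// ServiceAccount" loop seen in the log above.
// Assumption: kubectl is on PATH and the kubeconfig path is reachable.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls "kubectl get sa default" until it succeeds or ctx expires.
func waitForDefaultSA(ctx context.Context, kubeconfig string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// "kubectl get sa default" exits 0 once the ServiceAccount exists.
		cmd := exec.CommandContext(ctx, "kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if cmd.Run() == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("default ServiceAccount never appeared: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	// Illustrative path; the log above runs kubectl on the VM against /var/lib/minikube/kubeconfig.
	if err := waitForDefaultSA(ctx, "/var/lib/minikube/kubeconfig"); err != nil {
		panic(err)
	}
	fmt.Println("default ServiceAccount is present")
}

	The exit code of kubectl is the only readiness signal used, which is why the log shows one "get sa default" line every half second until the ServiceAccount appears.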
	I0610 12:02:17.007835   60146 settings.go:142] acquiring lock: {Name:mk00410f6b6051b7558c7a348cc8c9f1c35c7547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:02:17.007914   60146 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 12:02:17.009456   60146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/kubeconfig: {Name:mk6bc087e599296d9e4a696a021944fac20ee98b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:02:17.009751   60146 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.222 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 12:02:17.011669   60146 out.go:177] * Verifying Kubernetes components...
	I0610 12:02:17.009833   60146 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 12:02:17.011705   60146 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-281114"
	I0610 12:02:17.013481   60146 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-281114"
	W0610 12:02:17.013496   60146 addons.go:243] addon storage-provisioner should already be in state true
	I0610 12:02:17.013539   60146 host.go:66] Checking if "default-k8s-diff-port-281114" exists ...
	I0610 12:02:17.011715   60146 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-281114"
	I0610 12:02:17.013612   60146 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-281114"
	W0610 12:02:17.013628   60146 addons.go:243] addon metrics-server should already be in state true
	I0610 12:02:17.013669   60146 host.go:66] Checking if "default-k8s-diff-port-281114" exists ...
	I0610 12:02:17.009996   60146 config.go:182] Loaded profile config "default-k8s-diff-port-281114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 12:02:17.011717   60146 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-281114"
	I0610 12:02:17.013437   60146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:02:17.013792   60146 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-281114"
	I0610 12:02:17.013961   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.014009   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.014043   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.014066   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.014174   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.014211   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.030604   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43907
	I0610 12:02:17.031126   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.031701   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.031729   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.032073   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.032272   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 12:02:17.034510   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42827
	I0610 12:02:17.034557   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42127
	I0610 12:02:17.034950   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.035130   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.035437   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.035459   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.035888   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.035968   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.035986   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.036820   60146 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-281114"
	W0610 12:02:17.036839   60146 addons.go:243] addon default-storageclass should already be in state true
	I0610 12:02:17.036865   60146 host.go:66] Checking if "default-k8s-diff-port-281114" exists ...
	I0610 12:02:17.037323   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.037345   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.038068   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.038408   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.038428   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.039402   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.039436   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.052901   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40043
	I0610 12:02:17.053390   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.053936   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.053959   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.054226   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38515
	I0610 12:02:17.054303   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.054569   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.054905   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.054933   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.055019   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.055040   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.055448   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.055637   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 12:02:17.057623   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 12:02:17.059785   60146 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 12:02:17.058684   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38959
	I0610 12:02:17.060310   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.061277   60146 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 12:02:17.061292   60146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 12:02:17.061311   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 12:02:17.061738   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.061762   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.062097   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.062405   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 12:02:17.064169   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 12:02:17.065635   60146 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0610 12:02:17.065251   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 12:02:17.066901   60146 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0610 12:02:17.065677   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 12:02:17.066921   60146 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0610 12:02:17.066945   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 12:02:17.066952   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 12:02:17.065921   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 12:02:17.067144   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 12:02:17.067267   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 12:02:17.067437   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 12:02:17.070722   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 12:02:17.071110   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 12:02:17.071125   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 12:02:17.071422   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 12:02:17.071582   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 12:02:17.071714   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 12:02:17.072048   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 12:02:17.073784   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46447
	I0610 12:02:17.074157   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.074645   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.074659   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.074986   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.075129   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 12:02:17.076879   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 12:02:17.077138   60146 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 12:02:17.077153   60146 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 12:02:17.077170   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 12:02:17.080253   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 12:02:17.080667   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 12:02:17.080698   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 12:02:17.080862   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 12:02:17.081088   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 12:02:17.081280   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 12:02:17.081466   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 12:02:17.226805   60146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 12:02:17.257188   60146 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-281114" to be "Ready" ...
	I0610 12:02:17.266803   60146 node_ready.go:49] node "default-k8s-diff-port-281114" has status "Ready":"True"
	I0610 12:02:17.266829   60146 node_ready.go:38] duration metric: took 9.610473ms for node "default-k8s-diff-port-281114" to be "Ready" ...
	I0610 12:02:17.266840   60146 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 12:02:17.273132   60146 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5fgtk" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:17.327416   60146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0610 12:02:17.327442   60146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0610 12:02:17.366670   60146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 12:02:17.367685   60146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 12:02:17.378833   60146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0610 12:02:17.378858   60146 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0610 12:02:17.436533   60146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0610 12:02:17.436558   60146 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0610 12:02:17.490426   60146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0610 12:02:18.279491   60146 pod_ready.go:92] pod "coredns-7db6d8ff4d-5fgtk" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:18.279516   60146 pod_ready.go:81] duration metric: took 1.006353706s for pod "coredns-7db6d8ff4d-5fgtk" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.279527   60146 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fg8xx" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.286003   60146 pod_ready.go:92] pod "coredns-7db6d8ff4d-fg8xx" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:18.286024   60146 pod_ready.go:81] duration metric: took 6.488693ms for pod "coredns-7db6d8ff4d-fg8xx" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.286036   60146 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.295995   60146 pod_ready.go:92] pod "etcd-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:18.296015   60146 pod_ready.go:81] duration metric: took 9.973573ms for pod "etcd-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.296024   60146 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.302383   60146 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:18.302407   60146 pod_ready.go:81] duration metric: took 6.376673ms for pod "kube-apiserver-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.302418   60146 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.421208   60146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.054498973s)
	I0610 12:02:18.421244   60146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.053533062s)
	I0610 12:02:18.421270   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.421278   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.421285   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.421290   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.421645   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Closing plugin on server side
	I0610 12:02:18.421691   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.421706   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.421715   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.421717   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.421723   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.421726   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.421734   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.421743   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.422083   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Closing plugin on server side
	I0610 12:02:18.422103   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.422122   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.422123   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.422132   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.453377   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.453408   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.453803   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Closing plugin on server side
	I0610 12:02:18.453806   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.453831   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.475839   60146 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:18.475867   60146 pod_ready.go:81] duration metric: took 173.440125ms for pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.475881   60146 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wh756" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.673586   60146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.183120727s)
	I0610 12:02:18.673646   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.673662   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.673961   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Closing plugin on server side
	I0610 12:02:18.674001   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.674010   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.674020   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.674045   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.674315   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Closing plugin on server side
	I0610 12:02:18.674356   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.674365   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.674376   60146 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-281114"
	I0610 12:02:18.676402   60146 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0610 12:02:18.677734   60146 addons.go:510] duration metric: took 1.667897142s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0610 12:02:19.660297   60146 pod_ready.go:92] pod "kube-proxy-wh756" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:19.660327   60146 pod_ready.go:81] duration metric: took 1.184438894s for pod "kube-proxy-wh756" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:19.660340   60146 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:20.060583   60146 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:20.060607   60146 pod_ready.go:81] duration metric: took 400.25949ms for pod "kube-scheduler-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:20.060616   60146 pod_ready.go:38] duration metric: took 2.793765456s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 12:02:20.060634   60146 api_server.go:52] waiting for apiserver process to appear ...
	I0610 12:02:20.060693   60146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 12:02:20.076416   60146 api_server.go:72] duration metric: took 3.066630137s to wait for apiserver process to appear ...
	I0610 12:02:20.076441   60146 api_server.go:88] waiting for apiserver healthz status ...
	I0610 12:02:20.076462   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 12:02:20.081614   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 200:
	ok
	I0610 12:02:20.082567   60146 api_server.go:141] control plane version: v1.30.1
	I0610 12:02:20.082589   60146 api_server.go:131] duration metric: took 6.142085ms to wait for apiserver health ...
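	The healthz check logged above is a plain HTTPS GET against https://192.168.50.222:8444/healthz, retried until the apiserver answers 200 with body "ok". A minimal standalone sketch of such a probe follows; it skips TLS verification purely for illustration (the real client authenticates against the cluster CA), and the URL and timeout are taken from this run only as an example.

// healthz_poll.go: sketch of an apiserver health probe like the one logged above.
// Assumption: InsecureSkipVerify is used here only to keep the example self-contained.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// Healthy apiservers return 200 and the literal body "ok".
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s did not report healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.222:8444/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("healthz: ok")
}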
	I0610 12:02:20.082597   60146 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 12:02:20.263766   60146 system_pods.go:59] 9 kube-system pods found
	I0610 12:02:20.263803   60146 system_pods.go:61] "coredns-7db6d8ff4d-5fgtk" [03d948ca-122a-4042-8371-8a9422c187bc] Running
	I0610 12:02:20.263808   60146 system_pods.go:61] "coredns-7db6d8ff4d-fg8xx" [e91ae09c-8821-4843-8c0d-ea734433c213] Running
	I0610 12:02:20.263815   60146 system_pods.go:61] "etcd-default-k8s-diff-port-281114" [110985f7-c57e-453d-8bda-c5104d879eb4] Running
	I0610 12:02:20.263821   60146 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-281114" [e62181ca-648e-4d5f-b2a7-00bed06f3bd2] Running
	I0610 12:02:20.263827   60146 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-281114" [109f02bd-8c9c-40f6-98e8-5cf2b6d97deb] Running
	I0610 12:02:20.263832   60146 system_pods.go:61] "kube-proxy-wh756" [57cbf3d6-c149-4ae1-84d3-6df6a53ea091] Running
	I0610 12:02:20.263838   60146 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-281114" [00889b82-f4fc-4a98-86cd-ab1028dc4461] Running
	I0610 12:02:20.263848   60146 system_pods.go:61] "metrics-server-569cc877fc-j58s9" [f1c91612-b967-447e-bc71-13ba0d11864b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 12:02:20.263854   60146 system_pods.go:61] "storage-provisioner" [8df0a38c-5e91-4b10-a303-c4eff9545669] Running
	I0610 12:02:20.263866   60146 system_pods.go:74] duration metric: took 181.261717ms to wait for pod list to return data ...
	I0610 12:02:20.263878   60146 default_sa.go:34] waiting for default service account to be created ...
	I0610 12:02:20.460812   60146 default_sa.go:45] found service account: "default"
	I0610 12:02:20.460848   60146 default_sa.go:55] duration metric: took 196.961501ms for default service account to be created ...
	I0610 12:02:20.460860   60146 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 12:02:20.664565   60146 system_pods.go:86] 9 kube-system pods found
	I0610 12:02:20.664591   60146 system_pods.go:89] "coredns-7db6d8ff4d-5fgtk" [03d948ca-122a-4042-8371-8a9422c187bc] Running
	I0610 12:02:20.664596   60146 system_pods.go:89] "coredns-7db6d8ff4d-fg8xx" [e91ae09c-8821-4843-8c0d-ea734433c213] Running
	I0610 12:02:20.664601   60146 system_pods.go:89] "etcd-default-k8s-diff-port-281114" [110985f7-c57e-453d-8bda-c5104d879eb4] Running
	I0610 12:02:20.664606   60146 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-281114" [e62181ca-648e-4d5f-b2a7-00bed06f3bd2] Running
	I0610 12:02:20.664610   60146 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-281114" [109f02bd-8c9c-40f6-98e8-5cf2b6d97deb] Running
	I0610 12:02:20.664614   60146 system_pods.go:89] "kube-proxy-wh756" [57cbf3d6-c149-4ae1-84d3-6df6a53ea091] Running
	I0610 12:02:20.664618   60146 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-281114" [00889b82-f4fc-4a98-86cd-ab1028dc4461] Running
	I0610 12:02:20.664626   60146 system_pods.go:89] "metrics-server-569cc877fc-j58s9" [f1c91612-b967-447e-bc71-13ba0d11864b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 12:02:20.664631   60146 system_pods.go:89] "storage-provisioner" [8df0a38c-5e91-4b10-a303-c4eff9545669] Running
	I0610 12:02:20.664640   60146 system_pods.go:126] duration metric: took 203.773693ms to wait for k8s-apps to be running ...
	I0610 12:02:20.664649   60146 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 12:02:20.664690   60146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 12:02:20.681388   60146 system_svc.go:56] duration metric: took 16.731528ms WaitForService to wait for kubelet
	I0610 12:02:20.681411   60146 kubeadm.go:576] duration metric: took 3.671630148s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 12:02:20.681432   60146 node_conditions.go:102] verifying NodePressure condition ...
	I0610 12:02:20.861346   60146 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 12:02:20.861369   60146 node_conditions.go:123] node cpu capacity is 2
	I0610 12:02:20.861379   60146 node_conditions.go:105] duration metric: took 179.94199ms to run NodePressure ...
	I0610 12:02:20.861390   60146 start.go:240] waiting for startup goroutines ...
	I0610 12:02:20.861396   60146 start.go:245] waiting for cluster config update ...
	I0610 12:02:20.861405   60146 start.go:254] writing updated cluster config ...
	I0610 12:02:20.861658   60146 ssh_runner.go:195] Run: rm -f paused
	I0610 12:02:20.911134   60146 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 12:02:20.913129   60146 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-281114" cluster and "default" namespace by default
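	The "==> CRI-O <==" section below is crio's debug log of the CRI gRPC requests (/runtime.v1.RuntimeService/ListContainers, /runtime.v1.ImageService/ImageFsInfo, Version) made while this report was being collected. The same queries can be issued by hand with crictl ps and crictl imagefsinfo, or programmatically as in the hedged Go sketch below; the socket path and output formatting are assumptions, and the sketch is not part of minikube or of this test suite.

// cri_query.go: sketch of issuing the CRI calls that appear in the CRI-O
// debug log below, directly against the crio socket.
// Assumption: crio listens on /var/run/crio/crio.sock on the node.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// RuntimeService/ListContainers with no filter, as in the log below.
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	containers, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range containers.Containers {
		fmt.Printf("%s  %s  %s\n", c.Id, c.Metadata.Name, c.State)
	}

	// ImageService/ImageFsInfo: image filesystem usage per mountpoint.
	img := runtimeapi.NewImageServiceClient(conn)
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		panic(err)
	}
	for _, f := range fs.ImageFilesystems {
		fmt.Printf("%s: %d bytes used\n", f.FsId.Mountpoint, f.UsedBytes.Value)
	}
}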
	
	
	==> CRI-O <==
	Jun 10 12:02:47 no-preload-298179 crio[723]: time="2024-06-10 12:02:47.922691817Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718020967922669277,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=da18830b-df53-47bb-8402-b8679387effa name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:02:47 no-preload-298179 crio[723]: time="2024-06-10 12:02:47.923166906Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=440dbd4e-bce4-4d55-952a-67146a76e965 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:02:47 no-preload-298179 crio[723]: time="2024-06-10 12:02:47.923233981Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=440dbd4e-bce4-4d55-952a-67146a76e965 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:02:47 no-preload-298179 crio[723]: time="2024-06-10 12:02:47.923405465Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:683e59037f5932468d2405bbd3fd52d77ce5ad62e1759892e8d937191e057437,PodSandboxId:deee4653c7072b7c169a0567c8244abb526ea2a11a4098043cf947cc0401f0f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718020424861177652,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 783f523c-4c21-4ae0-bc18-9c391e7342b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1f746830,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b2e411906f14885e5c4a5b5164f742d7283e55c02bc310f8571b5ab021ce97e,PodSandboxId:66fc0cde87620c4b46299ad7ab86b3173f3a617d0a268e2cd36b76691ca25c43,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020424338795209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f622z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16cb4de3-afa9-4e45-bc85-e51273973808,},Annotations:map[string]string{io.kubernetes.container.hash: 7a12602a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a066b8539f611e071a9acfaeb6cc35563e3b55b5b270b17884aa8c2432be6a3,PodSandboxId:714f0a77adfbb94747c437f5a2a45f6ffee84236ddbe67f02786e139d992252e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020424386935854,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9mqrm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62
69d670-dffa-4526-8117-0b44df04554a,},Annotations:map[string]string{io.kubernetes.container.hash: c5356ac7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fddfd1132be797ed9025b8977067f68a9016051286041ed4ee3c38d3225136cd,PodSandboxId:6cc15e22a4c6ea6bfddd088767d080ae4f8dc0dc95bbbf793e0d9c05ab802627,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1718020423442794498,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fhndh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f848e7-44f6-4ab1-bf94-3189733abca2,},Annotations:map[string]string{io.kubernetes.container.hash: 7a55cea4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782d58441abcdd0791ae72b44e699f9f6a4c30867e4aec8eca2a0338dbaf33d0,PodSandboxId:72debdf12a31460f1dd1edbbb4834b7f471970978d402dc3360db0d240cfc374,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718020404548079400,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af991281f76a9c4d496d9158234dfc48,},Annotations:map[string]string{io.kubernetes.container.hash: 29667b85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cba6ee23d7a88b9d4aae2cad62cb70292ab5ff9a7f85aa6cef1aa90959382e9b,PodSandboxId:bb9cc9dfa0362795f02853309767ab44429a06bbf87b8887ee52eb4d7f379e1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718020404524877296,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac7787f0f4433798238ba6c479ed8cbe,},Annotations:map[string]string{io.kubernetes.container.hash: 44495568,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07a7b43ca72a0fe56bf21afcae51fd55480c85f73a08bd848fd2884f99005058,PodSandboxId:9d35d2e40c9b05e62daeb3ac27d37eaa125bbd4abd15f4321c57fa3cb327f4cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718020404512735021,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbed13fc899dffe5489a781ad246db8,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d5466fc0761ffafa56f8b58377652ecea0499411a50a90195f70039ad5ab9b,PodSandboxId:4b4cb53ff65abad35f6a102515e7e9a5c01be3e536f533a75a32ca4259afbb37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718020404424265060,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8dba9ebe3c0b0b9d3dac53b9b8aedb7,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=440dbd4e-bce4-4d55-952a-67146a76e965 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:02:47 no-preload-298179 crio[723]: time="2024-06-10 12:02:47.966616601Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=139bdc02-048e-4a89-9d4d-5342d82f0bba name=/runtime.v1.RuntimeService/Version
	Jun 10 12:02:47 no-preload-298179 crio[723]: time="2024-06-10 12:02:47.966705896Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=139bdc02-048e-4a89-9d4d-5342d82f0bba name=/runtime.v1.RuntimeService/Version
	Jun 10 12:02:47 no-preload-298179 crio[723]: time="2024-06-10 12:02:47.968239649Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=18a28cbb-0041-4721-b9a7-278e5e190f0e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:02:47 no-preload-298179 crio[723]: time="2024-06-10 12:02:47.968674724Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718020967968577672,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=18a28cbb-0041-4721-b9a7-278e5e190f0e name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:02:47 no-preload-298179 crio[723]: time="2024-06-10 12:02:47.969480354Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6ccaed38-d6bd-47a1-a615-548a8278b33d name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:02:47 no-preload-298179 crio[723]: time="2024-06-10 12:02:47.969547536Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6ccaed38-d6bd-47a1-a615-548a8278b33d name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:02:47 no-preload-298179 crio[723]: time="2024-06-10 12:02:47.969799354Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:683e59037f5932468d2405bbd3fd52d77ce5ad62e1759892e8d937191e057437,PodSandboxId:deee4653c7072b7c169a0567c8244abb526ea2a11a4098043cf947cc0401f0f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718020424861177652,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 783f523c-4c21-4ae0-bc18-9c391e7342b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1f746830,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b2e411906f14885e5c4a5b5164f742d7283e55c02bc310f8571b5ab021ce97e,PodSandboxId:66fc0cde87620c4b46299ad7ab86b3173f3a617d0a268e2cd36b76691ca25c43,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020424338795209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f622z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16cb4de3-afa9-4e45-bc85-e51273973808,},Annotations:map[string]string{io.kubernetes.container.hash: 7a12602a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a066b8539f611e071a9acfaeb6cc35563e3b55b5b270b17884aa8c2432be6a3,PodSandboxId:714f0a77adfbb94747c437f5a2a45f6ffee84236ddbe67f02786e139d992252e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020424386935854,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9mqrm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62
69d670-dffa-4526-8117-0b44df04554a,},Annotations:map[string]string{io.kubernetes.container.hash: c5356ac7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fddfd1132be797ed9025b8977067f68a9016051286041ed4ee3c38d3225136cd,PodSandboxId:6cc15e22a4c6ea6bfddd088767d080ae4f8dc0dc95bbbf793e0d9c05ab802627,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1718020423442794498,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fhndh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f848e7-44f6-4ab1-bf94-3189733abca2,},Annotations:map[string]string{io.kubernetes.container.hash: 7a55cea4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782d58441abcdd0791ae72b44e699f9f6a4c30867e4aec8eca2a0338dbaf33d0,PodSandboxId:72debdf12a31460f1dd1edbbb4834b7f471970978d402dc3360db0d240cfc374,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718020404548079400,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af991281f76a9c4d496d9158234dfc48,},Annotations:map[string]string{io.kubernetes.container.hash: 29667b85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cba6ee23d7a88b9d4aae2cad62cb70292ab5ff9a7f85aa6cef1aa90959382e9b,PodSandboxId:bb9cc9dfa0362795f02853309767ab44429a06bbf87b8887ee52eb4d7f379e1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718020404524877296,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac7787f0f4433798238ba6c479ed8cbe,},Annotations:map[string]string{io.kubernetes.container.hash: 44495568,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07a7b43ca72a0fe56bf21afcae51fd55480c85f73a08bd848fd2884f99005058,PodSandboxId:9d35d2e40c9b05e62daeb3ac27d37eaa125bbd4abd15f4321c57fa3cb327f4cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718020404512735021,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbed13fc899dffe5489a781ad246db8,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d5466fc0761ffafa56f8b58377652ecea0499411a50a90195f70039ad5ab9b,PodSandboxId:4b4cb53ff65abad35f6a102515e7e9a5c01be3e536f533a75a32ca4259afbb37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718020404424265060,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8dba9ebe3c0b0b9d3dac53b9b8aedb7,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6ccaed38-d6bd-47a1-a615-548a8278b33d name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:02:48 no-preload-298179 crio[723]: time="2024-06-10 12:02:48.014746612Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=98a21051-ad08-43a9-893a-56363f3b9977 name=/runtime.v1.RuntimeService/Version
	Jun 10 12:02:48 no-preload-298179 crio[723]: time="2024-06-10 12:02:48.014978160Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=98a21051-ad08-43a9-893a-56363f3b9977 name=/runtime.v1.RuntimeService/Version
	Jun 10 12:02:48 no-preload-298179 crio[723]: time="2024-06-10 12:02:48.016327920Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0dc6e5e4-47ec-49d1-a2b8-823131610166 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:02:48 no-preload-298179 crio[723]: time="2024-06-10 12:02:48.016663753Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718020968016640489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0dc6e5e4-47ec-49d1-a2b8-823131610166 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:02:48 no-preload-298179 crio[723]: time="2024-06-10 12:02:48.017158756Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=86f1d49e-201b-49e6-8b81-5e41901b954b name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:02:48 no-preload-298179 crio[723]: time="2024-06-10 12:02:48.017225544Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=86f1d49e-201b-49e6-8b81-5e41901b954b name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:02:48 no-preload-298179 crio[723]: time="2024-06-10 12:02:48.017404976Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:683e59037f5932468d2405bbd3fd52d77ce5ad62e1759892e8d937191e057437,PodSandboxId:deee4653c7072b7c169a0567c8244abb526ea2a11a4098043cf947cc0401f0f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718020424861177652,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 783f523c-4c21-4ae0-bc18-9c391e7342b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1f746830,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b2e411906f14885e5c4a5b5164f742d7283e55c02bc310f8571b5ab021ce97e,PodSandboxId:66fc0cde87620c4b46299ad7ab86b3173f3a617d0a268e2cd36b76691ca25c43,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020424338795209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f622z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16cb4de3-afa9-4e45-bc85-e51273973808,},Annotations:map[string]string{io.kubernetes.container.hash: 7a12602a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a066b8539f611e071a9acfaeb6cc35563e3b55b5b270b17884aa8c2432be6a3,PodSandboxId:714f0a77adfbb94747c437f5a2a45f6ffee84236ddbe67f02786e139d992252e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020424386935854,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9mqrm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62
69d670-dffa-4526-8117-0b44df04554a,},Annotations:map[string]string{io.kubernetes.container.hash: c5356ac7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fddfd1132be797ed9025b8977067f68a9016051286041ed4ee3c38d3225136cd,PodSandboxId:6cc15e22a4c6ea6bfddd088767d080ae4f8dc0dc95bbbf793e0d9c05ab802627,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1718020423442794498,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fhndh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f848e7-44f6-4ab1-bf94-3189733abca2,},Annotations:map[string]string{io.kubernetes.container.hash: 7a55cea4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782d58441abcdd0791ae72b44e699f9f6a4c30867e4aec8eca2a0338dbaf33d0,PodSandboxId:72debdf12a31460f1dd1edbbb4834b7f471970978d402dc3360db0d240cfc374,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718020404548079400,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af991281f76a9c4d496d9158234dfc48,},Annotations:map[string]string{io.kubernetes.container.hash: 29667b85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cba6ee23d7a88b9d4aae2cad62cb70292ab5ff9a7f85aa6cef1aa90959382e9b,PodSandboxId:bb9cc9dfa0362795f02853309767ab44429a06bbf87b8887ee52eb4d7f379e1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718020404524877296,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac7787f0f4433798238ba6c479ed8cbe,},Annotations:map[string]string{io.kubernetes.container.hash: 44495568,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07a7b43ca72a0fe56bf21afcae51fd55480c85f73a08bd848fd2884f99005058,PodSandboxId:9d35d2e40c9b05e62daeb3ac27d37eaa125bbd4abd15f4321c57fa3cb327f4cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718020404512735021,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbed13fc899dffe5489a781ad246db8,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d5466fc0761ffafa56f8b58377652ecea0499411a50a90195f70039ad5ab9b,PodSandboxId:4b4cb53ff65abad35f6a102515e7e9a5c01be3e536f533a75a32ca4259afbb37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718020404424265060,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8dba9ebe3c0b0b9d3dac53b9b8aedb7,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=86f1d49e-201b-49e6-8b81-5e41901b954b name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:02:48 no-preload-298179 crio[723]: time="2024-06-10 12:02:48.054224619Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1cd8b32c-a424-4289-b365-3903c9a4eb2a name=/runtime.v1.RuntimeService/Version
	Jun 10 12:02:48 no-preload-298179 crio[723]: time="2024-06-10 12:02:48.054320949Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1cd8b32c-a424-4289-b365-3903c9a4eb2a name=/runtime.v1.RuntimeService/Version
	Jun 10 12:02:48 no-preload-298179 crio[723]: time="2024-06-10 12:02:48.055672472Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eddee704-c28f-4a98-a1b7-e8e96baea2b4 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:02:48 no-preload-298179 crio[723]: time="2024-06-10 12:02:48.056022957Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718020968055995986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eddee704-c28f-4a98-a1b7-e8e96baea2b4 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:02:48 no-preload-298179 crio[723]: time="2024-06-10 12:02:48.056569241Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a4f5a0c-e376-4584-87c3-8b96f3ea05b3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:02:48 no-preload-298179 crio[723]: time="2024-06-10 12:02:48.056622575Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a4f5a0c-e376-4584-87c3-8b96f3ea05b3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:02:48 no-preload-298179 crio[723]: time="2024-06-10 12:02:48.056812091Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:683e59037f5932468d2405bbd3fd52d77ce5ad62e1759892e8d937191e057437,PodSandboxId:deee4653c7072b7c169a0567c8244abb526ea2a11a4098043cf947cc0401f0f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718020424861177652,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 783f523c-4c21-4ae0-bc18-9c391e7342b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1f746830,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b2e411906f14885e5c4a5b5164f742d7283e55c02bc310f8571b5ab021ce97e,PodSandboxId:66fc0cde87620c4b46299ad7ab86b3173f3a617d0a268e2cd36b76691ca25c43,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020424338795209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f622z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16cb4de3-afa9-4e45-bc85-e51273973808,},Annotations:map[string]string{io.kubernetes.container.hash: 7a12602a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a066b8539f611e071a9acfaeb6cc35563e3b55b5b270b17884aa8c2432be6a3,PodSandboxId:714f0a77adfbb94747c437f5a2a45f6ffee84236ddbe67f02786e139d992252e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020424386935854,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9mqrm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62
69d670-dffa-4526-8117-0b44df04554a,},Annotations:map[string]string{io.kubernetes.container.hash: c5356ac7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fddfd1132be797ed9025b8977067f68a9016051286041ed4ee3c38d3225136cd,PodSandboxId:6cc15e22a4c6ea6bfddd088767d080ae4f8dc0dc95bbbf793e0d9c05ab802627,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1718020423442794498,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fhndh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f848e7-44f6-4ab1-bf94-3189733abca2,},Annotations:map[string]string{io.kubernetes.container.hash: 7a55cea4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782d58441abcdd0791ae72b44e699f9f6a4c30867e4aec8eca2a0338dbaf33d0,PodSandboxId:72debdf12a31460f1dd1edbbb4834b7f471970978d402dc3360db0d240cfc374,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718020404548079400,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af991281f76a9c4d496d9158234dfc48,},Annotations:map[string]string{io.kubernetes.container.hash: 29667b85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cba6ee23d7a88b9d4aae2cad62cb70292ab5ff9a7f85aa6cef1aa90959382e9b,PodSandboxId:bb9cc9dfa0362795f02853309767ab44429a06bbf87b8887ee52eb4d7f379e1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718020404524877296,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac7787f0f4433798238ba6c479ed8cbe,},Annotations:map[string]string{io.kubernetes.container.hash: 44495568,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07a7b43ca72a0fe56bf21afcae51fd55480c85f73a08bd848fd2884f99005058,PodSandboxId:9d35d2e40c9b05e62daeb3ac27d37eaa125bbd4abd15f4321c57fa3cb327f4cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718020404512735021,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbed13fc899dffe5489a781ad246db8,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d5466fc0761ffafa56f8b58377652ecea0499411a50a90195f70039ad5ab9b,PodSandboxId:4b4cb53ff65abad35f6a102515e7e9a5c01be3e536f533a75a32ca4259afbb37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718020404424265060,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8dba9ebe3c0b0b9d3dac53b9b8aedb7,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2a4f5a0c-e376-4584-87c3-8b96f3ea05b3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	683e59037f593       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   deee4653c7072       storage-provisioner
	5a066b8539f61       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   714f0a77adfbb       coredns-7db6d8ff4d-9mqrm
	7b2e411906f14       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   66fc0cde87620       coredns-7db6d8ff4d-f622z
	fddfd1132be79       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   9 minutes ago       Running             kube-proxy                0                   6cc15e22a4c6e       kube-proxy-fhndh
	782d58441abcd       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   72debdf12a314       etcd-no-preload-298179
	cba6ee23d7a88       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   9 minutes ago       Running             kube-apiserver            2                   bb9cc9dfa0362       kube-apiserver-no-preload-298179
	07a7b43ca72a0       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   9 minutes ago       Running             kube-scheduler            2                   9d35d2e40c9b0       kube-scheduler-no-preload-298179
	20d5466fc0761       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   9 minutes ago       Running             kube-controller-manager   2                   4b4cb53ff65ab       kube-controller-manager-no-preload-298179
	
	
	==> coredns [5a066b8539f611e071a9acfaeb6cc35563e3b55b5b270b17884aa8c2432be6a3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [7b2e411906f14885e5c4a5b5164f742d7283e55c02bc310f8571b5ab021ce97e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-298179
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-298179
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=no-preload-298179
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T11_53_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 11:53:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-298179
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 12:02:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 11:58:55 +0000   Mon, 10 Jun 2024 11:53:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 11:58:55 +0000   Mon, 10 Jun 2024 11:53:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 11:58:55 +0000   Mon, 10 Jun 2024 11:53:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 11:58:55 +0000   Mon, 10 Jun 2024 11:53:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.48
	  Hostname:    no-preload-298179
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 29602cfed4bd43bfa2d60195b75916d2
	  System UUID:                29602cfe-d4bd-43bf-a2d6-0195b75916d2
	  Boot ID:                    d0445246-42cf-4286-a8eb-214294939a5d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-9mqrm                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 coredns-7db6d8ff4d-f622z                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 etcd-no-preload-298179                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-no-preload-298179             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-no-preload-298179    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-fhndh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 kube-scheduler-no-preload-298179             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-569cc877fc-jp7dr              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m4s                   kube-proxy       
	  Normal  Starting                 9m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m19s (x2 over 9m19s)  kubelet          Node no-preload-298179 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s (x2 over 9m19s)  kubelet          Node no-preload-298179 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s (x2 over 9m19s)  kubelet          Node no-preload-298179 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m7s                   node-controller  Node no-preload-298179 event: Registered Node no-preload-298179 in Controller
	
	
	==> dmesg <==
	[  +0.042998] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.613186] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.883261] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.560534] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.011373] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.056246] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061587] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.162937] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.141107] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.294949] systemd-fstab-generator[708]: Ignoring "noauto" option for root device
	[ +16.054144] systemd-fstab-generator[1229]: Ignoring "noauto" option for root device
	[  +0.059586] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.600492] systemd-fstab-generator[1354]: Ignoring "noauto" option for root device
	[  +3.872291] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.411535] kauditd_printk_skb: 37 callbacks suppressed
	[  +6.614635] kauditd_printk_skb: 35 callbacks suppressed
	[Jun10 11:53] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.520447] systemd-fstab-generator[4036]: Ignoring "noauto" option for root device
	[  +6.053381] systemd-fstab-generator[4360]: Ignoring "noauto" option for root device
	[  +0.071960] kauditd_printk_skb: 53 callbacks suppressed
	[ +13.753071] systemd-fstab-generator[4574]: Ignoring "noauto" option for root device
	[  +0.107073] kauditd_printk_skb: 12 callbacks suppressed
	[Jun10 11:54] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [782d58441abcdd0791ae72b44e699f9f6a4c30867e4aec8eca2a0338dbaf33d0] <==
	{"level":"info","ts":"2024-06-10T11:53:24.967542Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.48:2380"}
	{"level":"info","ts":"2024-06-10T11:53:24.967574Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.48:2380"}
	{"level":"info","ts":"2024-06-10T11:53:25.369124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a50af7ffd27cbe1 is starting a new election at term 1"}
	{"level":"info","ts":"2024-06-10T11:53:25.369244Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a50af7ffd27cbe1 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-06-10T11:53:25.369297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a50af7ffd27cbe1 received MsgPreVoteResp from 7a50af7ffd27cbe1 at term 1"}
	{"level":"info","ts":"2024-06-10T11:53:25.369334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a50af7ffd27cbe1 became candidate at term 2"}
	{"level":"info","ts":"2024-06-10T11:53:25.369358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a50af7ffd27cbe1 received MsgVoteResp from 7a50af7ffd27cbe1 at term 2"}
	{"level":"info","ts":"2024-06-10T11:53:25.369386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a50af7ffd27cbe1 became leader at term 2"}
	{"level":"info","ts":"2024-06-10T11:53:25.369415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7a50af7ffd27cbe1 elected leader 7a50af7ffd27cbe1 at term 2"}
	{"level":"info","ts":"2024-06-10T11:53:25.373967Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7a50af7ffd27cbe1","local-member-attributes":"{Name:no-preload-298179 ClientURLs:[https://192.168.39.48:2379]}","request-path":"/0/members/7a50af7ffd27cbe1/attributes","cluster-id":"59383b002ca7add2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-10T11:53:25.374098Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T11:53:25.374183Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T11:53:25.379088Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-10T11:53:25.379127Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-10T11:53:25.374213Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T11:53:25.381282Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"59383b002ca7add2","local-member-id":"7a50af7ffd27cbe1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T11:53:25.381378Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T11:53:25.381422Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T11:53:25.382681Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.48:2379"}
	{"level":"info","ts":"2024-06-10T11:53:25.384793Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-06-10T11:57:04.035715Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"341.347392ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14691181797706206169 > lease_revoke:<id:4be19001fef68b8a>","response":"size:27"}
	{"level":"info","ts":"2024-06-10T11:57:04.03644Z","caller":"traceutil/trace.go:171","msg":"trace[196209639] linearizableReadLoop","detail":"{readStateIndex:667; appliedIndex:666; }","duration":"188.134143ms","start":"2024-06-10T11:57:03.848262Z","end":"2024-06-10T11:57:04.036396Z","steps":["trace[196209639] 'read index received'  (duration: 23.523µs)","trace[196209639] 'applied index is now lower than readState.Index'  (duration: 188.109146ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-10T11:57:04.036753Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.431877ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-10T11:57:04.036823Z","caller":"traceutil/trace.go:171","msg":"trace[63460844] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:609; }","duration":"188.569051ms","start":"2024-06-10T11:57:03.848235Z","end":"2024-06-10T11:57:04.036804Z","steps":["trace[63460844] 'agreement among raft nodes before linearized reading'  (duration: 188.424802ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T11:57:04.198602Z","caller":"traceutil/trace.go:171","msg":"trace[1682754429] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"116.932566ms","start":"2024-06-10T11:57:04.081632Z","end":"2024-06-10T11:57:04.198564Z","steps":["trace[1682754429] 'process raft request'  (duration: 116.766038ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:02:48 up 14 min,  0 users,  load average: 0.14, 0.14, 0.14
	Linux no-preload-298179 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cba6ee23d7a88b9d4aae2cad62cb70292ab5ff9a7f85aa6cef1aa90959382e9b] <==
	I0610 11:56:45.394896       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 11:58:26.773472       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 11:58:26.773875       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0610 11:58:27.775171       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 11:58:27.775253       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0610 11:58:27.775265       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 11:58:27.775378       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 11:58:27.775494       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0610 11:58:27.776704       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 11:59:27.776222       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 11:59:27.776424       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0610 11:59:27.776470       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 11:59:27.777692       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 11:59:27.777831       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0610 11:59:27.777858       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:01:27.777603       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:01:27.778080       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0610 12:01:27.778122       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:01:27.778162       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:01:27.778263       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0610 12:01:27.780092       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [20d5466fc0761ffafa56f8b58377652ecea0499411a50a90195f70039ad5ab9b] <==
	I0610 11:57:12.702470       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 11:57:42.056595       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 11:57:42.710283       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 11:58:12.062872       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 11:58:12.719702       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 11:58:42.069091       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 11:58:42.729944       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 11:59:12.075507       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 11:59:12.738425       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0610 11:59:28.613230       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="286.245µs"
	E0610 11:59:42.081431       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 11:59:42.604473       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="193.063µs"
	I0610 11:59:42.746001       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:00:12.087858       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:00:12.754135       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:00:42.093721       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:00:42.763951       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:01:12.100480       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:01:12.772690       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:01:42.109102       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:01:42.781364       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:02:12.115117       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:02:12.789195       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:02:42.120346       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:02:42.798493       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [fddfd1132be797ed9025b8977067f68a9016051286041ed4ee3c38d3225136cd] <==
	I0610 11:53:43.780576       1 server_linux.go:69] "Using iptables proxy"
	I0610 11:53:43.812122       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.48"]
	I0610 11:53:43.911550       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 11:53:43.911603       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 11:53:43.911620       1 server_linux.go:165] "Using iptables Proxier"
	I0610 11:53:43.919783       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 11:53:43.920017       1 server.go:872] "Version info" version="v1.30.1"
	I0610 11:53:43.920087       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 11:53:43.921355       1 config.go:192] "Starting service config controller"
	I0610 11:53:43.921389       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 11:53:43.921415       1 config.go:101] "Starting endpoint slice config controller"
	I0610 11:53:43.921422       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 11:53:43.925806       1 config.go:319] "Starting node config controller"
	I0610 11:53:43.925831       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 11:53:44.022192       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 11:53:44.022256       1 shared_informer.go:320] Caches are synced for service config
	I0610 11:53:44.025953       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [07a7b43ca72a0fe56bf21afcae51fd55480c85f73a08bd848fd2884f99005058] <==
	W0610 11:53:26.800795       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 11:53:26.803177       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0610 11:53:27.632491       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0610 11:53:27.632631       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0610 11:53:27.697631       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0610 11:53:27.697697       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0610 11:53:27.869455       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0610 11:53:27.869500       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0610 11:53:27.883792       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 11:53:27.883836       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0610 11:53:27.944357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 11:53:27.944432       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0610 11:53:28.032736       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 11:53:28.032888       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 11:53:28.046125       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 11:53:28.046212       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0610 11:53:28.048391       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 11:53:28.048457       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0610 11:53:28.070832       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 11:53:28.070881       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0610 11:53:28.090115       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0610 11:53:28.090225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0610 11:53:28.141238       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 11:53:28.141275       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 11:53:30.762643       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 10 12:00:29 no-preload-298179 kubelet[4367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:00:29 no-preload-298179 kubelet[4367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:00:29 no-preload-298179 kubelet[4367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:00:29 no-preload-298179 kubelet[4367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:00:31 no-preload-298179 kubelet[4367]: E0610 12:00:31.590099    4367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jp7dr" podUID="21136ae9-40d8-4857-aca5-47e3fa3b7e9c"
	Jun 10 12:00:43 no-preload-298179 kubelet[4367]: E0610 12:00:43.589509    4367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jp7dr" podUID="21136ae9-40d8-4857-aca5-47e3fa3b7e9c"
	Jun 10 12:00:54 no-preload-298179 kubelet[4367]: E0610 12:00:54.589578    4367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jp7dr" podUID="21136ae9-40d8-4857-aca5-47e3fa3b7e9c"
	Jun 10 12:01:07 no-preload-298179 kubelet[4367]: E0610 12:01:07.590150    4367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jp7dr" podUID="21136ae9-40d8-4857-aca5-47e3fa3b7e9c"
	Jun 10 12:01:19 no-preload-298179 kubelet[4367]: E0610 12:01:19.590522    4367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jp7dr" podUID="21136ae9-40d8-4857-aca5-47e3fa3b7e9c"
	Jun 10 12:01:29 no-preload-298179 kubelet[4367]: E0610 12:01:29.611213    4367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:01:29 no-preload-298179 kubelet[4367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:01:29 no-preload-298179 kubelet[4367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:01:29 no-preload-298179 kubelet[4367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:01:29 no-preload-298179 kubelet[4367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:01:31 no-preload-298179 kubelet[4367]: E0610 12:01:31.588983    4367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jp7dr" podUID="21136ae9-40d8-4857-aca5-47e3fa3b7e9c"
	Jun 10 12:01:45 no-preload-298179 kubelet[4367]: E0610 12:01:45.589879    4367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jp7dr" podUID="21136ae9-40d8-4857-aca5-47e3fa3b7e9c"
	Jun 10 12:02:00 no-preload-298179 kubelet[4367]: E0610 12:02:00.589395    4367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jp7dr" podUID="21136ae9-40d8-4857-aca5-47e3fa3b7e9c"
	Jun 10 12:02:15 no-preload-298179 kubelet[4367]: E0610 12:02:15.590809    4367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jp7dr" podUID="21136ae9-40d8-4857-aca5-47e3fa3b7e9c"
	Jun 10 12:02:26 no-preload-298179 kubelet[4367]: E0610 12:02:26.589382    4367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jp7dr" podUID="21136ae9-40d8-4857-aca5-47e3fa3b7e9c"
	Jun 10 12:02:29 no-preload-298179 kubelet[4367]: E0610 12:02:29.608410    4367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:02:29 no-preload-298179 kubelet[4367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:02:29 no-preload-298179 kubelet[4367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:02:29 no-preload-298179 kubelet[4367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:02:29 no-preload-298179 kubelet[4367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:02:38 no-preload-298179 kubelet[4367]: E0610 12:02:38.589780    4367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jp7dr" podUID="21136ae9-40d8-4857-aca5-47e3fa3b7e9c"
	
	
	==> storage-provisioner [683e59037f5932468d2405bbd3fd52d77ce5ad62e1759892e8d937191e057437] <==
	I0610 11:53:44.993994       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 11:53:45.015618       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 11:53:45.015676       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 11:53:45.035914       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 11:53:45.036716       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4915957e-92d9-4a4d-9131-fdfe380bf55e", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-298179_7d49b23d-b859-4601-8012-2b681d11b5b3 became leader
	I0610 11:53:45.036788       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-298179_7d49b23d-b859-4601-8012-2b681d11b5b3!
	I0610 11:53:45.137331       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-298179_7d49b23d-b859-4601-8012-2b681d11b5b3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-298179 -n no-preload-298179
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-298179 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-jp7dr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-298179 describe pod metrics-server-569cc877fc-jp7dr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-298179 describe pod metrics-server-569cc877fc-jp7dr: exit status 1 (69.724623ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-jp7dr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-298179 describe pod metrics-server-569cc877fc-jp7dr: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.54s)
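The post-mortem steps above can be repeated by hand when triaging a failure like this one. A minimal sketch, assuming the context and pod names taken from the log above; the kubelet messages place the pod in kube-system, so a namespace flag is added that the harness command at helpers_test.go:277 omitted:

	# list every pod that is not in the Running phase, as helpers_test.go:261 does
	kubectl --context no-preload-298179 get po -A --field-selector=status.phase!=Running
	# describe the stuck pod; without -n the lookup happens in the default namespace,
	# which is the likely cause of the NotFound above, while metrics-server lives in kube-system
	kubectl --context no-preload-298179 -n kube-system describe pod metrics-server-569cc877fc-jp7dr

If the ReplicaSet has already replaced the pod, the old name will still return NotFound; a plain kubectl -n kube-system get pods shows the current pod name to describe instead.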

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
E0610 11:56:57.914130   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
E0610 11:57:15.502915   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
E0610 11:59:12.453562   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
E0610 12:01:57.913766   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
E0610 12:04:12.453485   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
E0610 12:05:00.962382   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
[the WARNING line above repeated verbatim on each subsequent poll attempt; 47 further identical lines omitted]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
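Editorial note: the final warning above differs from the rest. It appears to come from client-go's token-bucket request limiter, which wraps golang.org/x/time/rate; once the remaining context deadline (the test's 9m0s budget) is shorter than the wait for the next token, Wait fails immediately with this message instead of blocking. A self-contained sketch of that behaviour, with an illustrative interval and timeout rather than the helper's actual settings:

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// One token every 5 seconds, burst of 1 -- roughly a polling client.
	limiter := rate.NewLimiter(rate.Every(5*time.Second), 1)
	limiter.Allow() // spend the initial token

	// A context whose deadline expires before the next token is available.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Wait refuses up front because the reservation cannot be satisfied
	// before the deadline.
	if err := limiter.Wait(ctx); err != nil {
		fmt.Println(err) // rate: Wait(n=1) would exceed context deadline
	}
}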
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-166693 -n old-k8s-version-166693
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-166693 -n old-k8s-version-166693: exit status 2 (232.493398ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-166693" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
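Editorial note: the repeated WARNING lines are produced by listing pods with the k8s-app=kubernetes-dashboard label selector until one reports Running, and every attempt fails because nothing is answering on 192.168.72.34:8443. A minimal sketch of that kind of poll with client-go; the kubeconfig path, interval, and timeout are illustrative, not the test helper's actual code:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; the report uses the profile's own config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	err = wait.PollImmediate(5*time.Second, 9*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// A down apiserver surfaces here as "connect: connection refused";
			// log it and keep polling rather than failing immediately.
			fmt.Printf("WARNING: pod list returned: %v\n", err)
			return false, nil
		}
		for _, p := range pods.Items {
			if p.Status.Phase == "Running" {
				return true, nil
			}
		}
		return false, nil
	})
	// If nothing comes up within the budget, this is "context deadline exceeded".
	fmt.Println("wait result:", err)
}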
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-166693 -n old-k8s-version-166693
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-166693 -n old-k8s-version-166693: exit status 2 (233.330261ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
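Editorial note: the two status probes are not contradictory: {{.Host}} reports the VM itself (Running) while {{.APIServer}} reports the control plane (Stopped), and minikube status exits non-zero when the profile is not fully running, which is why the helper adds "(may be ok)". A rough sketch of how a helper might shell out for such a probe; the 30-second timeout is illustrative, while the binary path, flags, and profile name are copied from the log above:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Same invocation as the post-mortem check above; a non-zero exit simply
	// means one of the reported components is not running.
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64",
		"status", "--format={{.APIServer}}",
		"-p", "old-k8s-version-166693", "-n", "old-k8s-version-166693")
	out, err := cmd.CombinedOutput()
	fmt.Printf("stdout: %s\nerr: %v\n", out, err)
}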
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-166693 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-324836                              | cert-expiration-324836       | jenkins | v1.33.1 | 10 Jun 24 11:39 UTC | 10 Jun 24 11:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-036579 | jenkins | v1.33.1 | 10 Jun 24 11:39 UTC | 10 Jun 24 11:39 UTC |
	|         | disable-driver-mounts-036579                           |                              |         |         |                     |                     |
	| start   | -p no-preload-298179                                   | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:39 UTC | 10 Jun 24 11:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-832735            | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:40 UTC | 10 Jun 24 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-832735                                  | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC | 10 Jun 24 11:41 UTC |
	| addons  | enable metrics-server -p no-preload-298179             | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC | 10 Jun 24 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-298179                                   | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC | 10 Jun 24 11:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC | 10 Jun 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-832735                 | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-832735                                  | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC | 10 Jun 24 11:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-166693        | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-298179                  | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC | 10 Jun 24 11:44 UTC |
	| start   | -p                                                     | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC | 10 Jun 24 11:49 UTC |
	|         | default-k8s-diff-port-281114                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p no-preload-298179                                   | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC | 10 Jun 24 11:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-166693                              | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:45 UTC | 10 Jun 24 11:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-166693             | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:45 UTC | 10 Jun 24 11:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-166693                              | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-281114  | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:49 UTC | 10 Jun 24 11:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:49 UTC |                     |
	|         | default-k8s-diff-port-281114                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-281114       | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:51 UTC | 10 Jun 24 12:02 UTC |
	|         | default-k8s-diff-port-281114                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 11:51:53
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 11:51:53.675460   60146 out.go:291] Setting OutFile to fd 1 ...
	I0610 11:51:53.675676   60146 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:51:53.675684   60146 out.go:304] Setting ErrFile to fd 2...
	I0610 11:51:53.675688   60146 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:51:53.675848   60146 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 11:51:53.676386   60146 out.go:298] Setting JSON to false
	I0610 11:51:53.677403   60146 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5655,"bootTime":1718014659,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 11:51:53.677465   60146 start.go:139] virtualization: kvm guest
	I0610 11:51:53.679851   60146 out.go:177] * [default-k8s-diff-port-281114] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 11:51:53.681209   60146 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 11:51:53.682492   60146 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 11:51:53.681162   60146 notify.go:220] Checking for updates...
	I0610 11:51:53.683939   60146 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 11:51:53.685202   60146 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 11:51:53.686363   60146 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 11:51:53.687770   60146 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 11:51:53.689668   60146 config.go:182] Loaded profile config "default-k8s-diff-port-281114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:51:53.690093   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:51:53.690167   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:51:53.705134   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35827
	I0610 11:51:53.705647   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:51:53.706289   60146 main.go:141] libmachine: Using API Version  1
	I0610 11:51:53.706314   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:51:53.706603   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:51:53.706788   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:51:53.707058   60146 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 11:51:53.707411   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:51:53.707451   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:51:53.722927   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45441
	I0610 11:51:53.723433   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:51:53.723927   60146 main.go:141] libmachine: Using API Version  1
	I0610 11:51:53.723953   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:51:53.724482   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:51:53.724651   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:51:53.763209   60146 out.go:177] * Using the kvm2 driver based on existing profile
	I0610 11:51:53.764436   60146 start.go:297] selected driver: kvm2
	I0610 11:51:53.764446   60146 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-281114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-281114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.222 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:51:53.764537   60146 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 11:51:53.765172   60146 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 11:51:53.765257   60146 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19046-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0610 11:51:53.782641   60146 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0610 11:51:53.783044   60146 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 11:51:53.783099   60146 cni.go:84] Creating CNI manager for ""
	I0610 11:51:53.783109   60146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:51:53.783152   60146 start.go:340] cluster config:
	{Name:default-k8s-diff-port-281114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-281114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.222 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:51:53.783254   60146 iso.go:125] acquiring lock: {Name:mk85871dbcb370cef71a4aa2ff7ef43503908c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 11:51:53.786018   60146 out.go:177] * Starting "default-k8s-diff-port-281114" primary control-plane node in "default-k8s-diff-port-281114" cluster
	I0610 11:51:53.787303   60146 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 11:51:53.787344   60146 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0610 11:51:53.787357   60146 cache.go:56] Caching tarball of preloaded images
	I0610 11:51:53.787439   60146 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 11:51:53.787455   60146 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0610 11:51:53.787569   60146 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/config.json ...
	I0610 11:51:53.787799   60146 start.go:360] acquireMachinesLock for default-k8s-diff-port-281114: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 11:51:53.787855   60146 start.go:364] duration metric: took 30.27µs to acquireMachinesLock for "default-k8s-diff-port-281114"
	I0610 11:51:53.787875   60146 start.go:96] Skipping create...Using existing machine configuration
	I0610 11:51:53.787881   60146 fix.go:54] fixHost starting: 
	I0610 11:51:53.788131   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:51:53.788165   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:51:53.805744   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35417
	I0610 11:51:53.806279   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:51:53.806909   60146 main.go:141] libmachine: Using API Version  1
	I0610 11:51:53.806936   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:51:53.807346   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:51:53.807532   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:51:53.807718   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 11:51:53.809469   60146 fix.go:112] recreateIfNeeded on default-k8s-diff-port-281114: state=Running err=<nil>
	W0610 11:51:53.809507   60146 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 11:51:53.811518   60146 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-281114" VM ...
	I0610 11:51:50.691535   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:52.691588   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:54.692007   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:54.248038   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:54.261302   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:54.261375   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:54.293194   57945 cri.go:89] found id: ""
	I0610 11:51:54.293228   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.293240   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:54.293247   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:54.293307   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:54.326656   57945 cri.go:89] found id: ""
	I0610 11:51:54.326687   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.326699   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:54.326707   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:54.326764   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:54.359330   57945 cri.go:89] found id: ""
	I0610 11:51:54.359365   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.359378   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:54.359386   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:54.359450   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:54.391520   57945 cri.go:89] found id: ""
	I0610 11:51:54.391549   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.391558   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:54.391565   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:54.391642   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:54.426803   57945 cri.go:89] found id: ""
	I0610 11:51:54.426840   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.426850   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:54.426860   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:54.426936   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:54.462618   57945 cri.go:89] found id: ""
	I0610 11:51:54.462645   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.462653   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:54.462659   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:54.462728   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:54.494599   57945 cri.go:89] found id: ""
	I0610 11:51:54.494631   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.494642   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:54.494650   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:54.494701   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:54.528236   57945 cri.go:89] found id: ""
	I0610 11:51:54.528265   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.528280   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:54.528290   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:54.528305   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:54.579562   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:54.579604   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:54.592871   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:54.592899   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:54.661928   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:54.661950   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:54.661984   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:54.741578   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:54.741611   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
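The block above is one iteration of minikube's log-gathering loop: no kube-apiserver (or any other control-plane) container is found, so the `kubectl describe nodes` call against localhost:8443 is refused and only the kubelet, dmesg, CRI-O, and container-status logs are collected. The same checks can be reproduced by hand; a minimal sketch, where PROFILE is a placeholder for the profile under test and the commands are taken from the Run: lines above:

	minikube -p PROFILE ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
	minikube -p PROFILE ssh -- sudo journalctl -u kubelet -n 400
	minikube -p PROFILE ssh -- sudo journalctl -u crio -n 400
	minikube -p PROFILE ssh -- sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig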
	I0610 11:51:53.939312   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:55.940181   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:53.812752   60146 machine.go:94] provisionDockerMachine start ...
	I0610 11:51:53.812779   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:51:53.813001   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:51:53.815580   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:51:53.815981   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:47:50 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:51:53.816013   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:51:53.816111   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:51:53.816288   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:51:53.816435   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:51:53.816577   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:51:53.816743   60146 main.go:141] libmachine: Using SSH client type: native
	I0610 11:51:53.817141   60146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.222 22 <nil> <nil>}
	I0610 11:51:53.817157   60146 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 11:51:56.705435   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
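This is the first of several SSH dial failures while libmachine provisions the existing default-k8s-diff-port-281114 VM; the guest at 192.168.50.222 is not reachable from the host yet. Host-side reachability could be checked with something like the following sketch; virsh and nc being available on the Jenkins host is an assumption:

	sudo virsh domifaddr default-k8s-diff-port-281114   # confirm the DHCP lease for 52:54:00:23:06:35
	ping -c 3 192.168.50.222
	nc -vz -w 5 192.168.50.222 22                        # is sshd answering on the guest?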
	I0610 11:51:56.692515   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:59.192511   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:57.283397   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:57.296631   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:57.296704   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:57.328185   57945 cri.go:89] found id: ""
	I0610 11:51:57.328217   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.328228   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:57.328237   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:57.328302   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:57.360137   57945 cri.go:89] found id: ""
	I0610 11:51:57.360163   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.360173   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:57.360188   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:57.360244   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:57.395638   57945 cri.go:89] found id: ""
	I0610 11:51:57.395680   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.395691   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:57.395700   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:57.395765   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:57.429024   57945 cri.go:89] found id: ""
	I0610 11:51:57.429051   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.429062   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:57.429070   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:57.429132   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:57.461726   57945 cri.go:89] found id: ""
	I0610 11:51:57.461757   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.461767   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:57.461773   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:57.461838   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:57.495055   57945 cri.go:89] found id: ""
	I0610 11:51:57.495078   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.495086   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:57.495092   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:57.495138   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:57.526495   57945 cri.go:89] found id: ""
	I0610 11:51:57.526521   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.526530   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:57.526536   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:57.526598   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:57.559160   57945 cri.go:89] found id: ""
	I0610 11:51:57.559181   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.559189   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:57.559197   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:57.559212   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:57.593801   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:57.593827   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:57.641074   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:57.641106   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:57.654097   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:57.654124   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:57.726137   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:57.726160   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:57.726176   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:00.302303   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:00.314500   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:00.314560   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:00.345865   57945 cri.go:89] found id: ""
	I0610 11:52:00.345889   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.345897   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:00.345902   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:00.345946   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:00.377383   57945 cri.go:89] found id: ""
	I0610 11:52:00.377405   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.377412   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:00.377417   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:00.377482   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:00.408667   57945 cri.go:89] found id: ""
	I0610 11:52:00.408694   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.408701   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:00.408706   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:00.408755   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:00.444349   57945 cri.go:89] found id: ""
	I0610 11:52:00.444379   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.444390   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:00.444397   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:00.444455   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:00.477886   57945 cri.go:89] found id: ""
	I0610 11:52:00.477910   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.477918   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:00.477924   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:00.477982   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:00.508996   57945 cri.go:89] found id: ""
	I0610 11:52:00.509023   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.509030   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:00.509036   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:00.509097   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:00.541548   57945 cri.go:89] found id: ""
	I0610 11:52:00.541572   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.541580   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:00.541585   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:00.541642   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:00.574507   57945 cri.go:89] found id: ""
	I0610 11:52:00.574534   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.574541   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:00.574550   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:00.574565   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:00.610838   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:00.610862   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:00.661155   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:00.661197   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:00.674122   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:00.674154   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:00.745943   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:00.745976   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:00.745993   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:58.439245   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:00.441145   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:59.777253   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:01.691833   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:04.193279   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:03.325365   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:03.337955   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:03.338042   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:03.370767   57945 cri.go:89] found id: ""
	I0610 11:52:03.370798   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.370810   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:03.370818   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:03.370903   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:03.402587   57945 cri.go:89] found id: ""
	I0610 11:52:03.402616   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.402623   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:03.402628   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:03.402684   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:03.436751   57945 cri.go:89] found id: ""
	I0610 11:52:03.436778   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.436788   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:03.436795   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:03.436854   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:03.467745   57945 cri.go:89] found id: ""
	I0610 11:52:03.467778   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.467788   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:03.467798   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:03.467865   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:03.499321   57945 cri.go:89] found id: ""
	I0610 11:52:03.499347   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.499355   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:03.499361   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:03.499419   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:03.534209   57945 cri.go:89] found id: ""
	I0610 11:52:03.534242   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.534253   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:03.534261   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:03.534318   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:03.567837   57945 cri.go:89] found id: ""
	I0610 11:52:03.567871   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.567882   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:03.567889   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:03.567954   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:03.604223   57945 cri.go:89] found id: ""
	I0610 11:52:03.604249   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.604258   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:03.604266   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:03.604280   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:03.659716   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:03.659751   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:03.673389   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:03.673425   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:03.746076   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:03.746104   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:03.746118   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:03.825803   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:03.825837   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:06.362151   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:06.375320   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:06.375394   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:06.409805   57945 cri.go:89] found id: ""
	I0610 11:52:06.409840   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.409851   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:06.409859   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:06.409914   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:06.447126   57945 cri.go:89] found id: ""
	I0610 11:52:06.447157   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.447167   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:06.447174   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:06.447237   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:06.479443   57945 cri.go:89] found id: ""
	I0610 11:52:06.479472   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.479483   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:06.479489   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:06.479546   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:06.511107   57945 cri.go:89] found id: ""
	I0610 11:52:06.511137   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.511148   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:06.511163   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:06.511223   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:06.542727   57945 cri.go:89] found id: ""
	I0610 11:52:06.542753   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.542761   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:06.542767   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:06.542812   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:06.582141   57945 cri.go:89] found id: ""
	I0610 11:52:06.582166   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.582174   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:06.582180   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:06.582239   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:06.615203   57945 cri.go:89] found id: ""
	I0610 11:52:06.615230   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.615240   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:06.615248   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:06.615314   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:06.650286   57945 cri.go:89] found id: ""
	I0610 11:52:06.650310   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.650317   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:06.650326   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:06.650338   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:06.721601   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:06.721631   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:06.721646   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:06.794645   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:06.794679   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:06.830598   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:06.830628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:06.880740   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:06.880786   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:02.939105   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:04.939366   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:07.439715   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:05.861224   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:06.691130   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:09.191608   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:09.394202   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:09.409822   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:09.409898   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:09.451573   57945 cri.go:89] found id: ""
	I0610 11:52:09.451597   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.451605   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:09.451611   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:09.451663   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:09.491039   57945 cri.go:89] found id: ""
	I0610 11:52:09.491069   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.491080   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:09.491087   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:09.491147   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:09.522023   57945 cri.go:89] found id: ""
	I0610 11:52:09.522050   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.522058   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:09.522063   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:09.522108   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:09.554014   57945 cri.go:89] found id: ""
	I0610 11:52:09.554040   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.554048   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:09.554057   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:09.554127   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:09.586285   57945 cri.go:89] found id: ""
	I0610 11:52:09.586318   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.586328   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:09.586336   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:09.586396   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:09.618362   57945 cri.go:89] found id: ""
	I0610 11:52:09.618391   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.618401   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:09.618408   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:09.618465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:09.651067   57945 cri.go:89] found id: ""
	I0610 11:52:09.651097   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.651108   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:09.651116   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:09.651174   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:09.682764   57945 cri.go:89] found id: ""
	I0610 11:52:09.682792   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.682799   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:09.682807   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:09.682819   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:09.755071   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:09.755096   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:09.755109   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:09.833635   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:09.833672   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:09.869744   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:09.869777   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:09.924045   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:09.924079   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:09.440296   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:11.939025   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:08.929184   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:11.691213   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:13.693439   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:12.438029   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:12.452003   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:12.452070   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:12.485680   57945 cri.go:89] found id: ""
	I0610 11:52:12.485711   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.485719   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:12.485725   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:12.485773   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:12.519200   57945 cri.go:89] found id: ""
	I0610 11:52:12.519227   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.519238   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:12.519245   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:12.519317   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:12.553154   57945 cri.go:89] found id: ""
	I0610 11:52:12.553179   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.553185   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:12.553191   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:12.553237   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:12.584499   57945 cri.go:89] found id: ""
	I0610 11:52:12.584543   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.584555   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:12.584564   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:12.584619   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:12.619051   57945 cri.go:89] found id: ""
	I0610 11:52:12.619079   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.619094   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:12.619102   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:12.619165   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:12.653652   57945 cri.go:89] found id: ""
	I0610 11:52:12.653690   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.653702   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:12.653710   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:12.653773   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:12.685887   57945 cri.go:89] found id: ""
	I0610 11:52:12.685919   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.685930   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:12.685938   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:12.685997   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:12.719534   57945 cri.go:89] found id: ""
	I0610 11:52:12.719567   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.719578   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:12.719591   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:12.719603   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:12.770689   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:12.770725   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:12.783574   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:12.783604   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:12.855492   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:12.855518   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:12.855529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:12.928993   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:12.929037   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:15.487670   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:15.501367   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:15.501437   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:15.534205   57945 cri.go:89] found id: ""
	I0610 11:52:15.534248   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.534256   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:15.534262   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:15.534315   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:15.570972   57945 cri.go:89] found id: ""
	I0610 11:52:15.571001   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.571008   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:15.571013   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:15.571073   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:15.604233   57945 cri.go:89] found id: ""
	I0610 11:52:15.604258   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.604267   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:15.604273   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:15.604328   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:15.637119   57945 cri.go:89] found id: ""
	I0610 11:52:15.637150   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.637159   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:15.637167   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:15.637226   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:15.670548   57945 cri.go:89] found id: ""
	I0610 11:52:15.670572   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.670580   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:15.670586   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:15.670644   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:15.706374   57945 cri.go:89] found id: ""
	I0610 11:52:15.706398   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.706406   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:15.706412   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:15.706457   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:15.742828   57945 cri.go:89] found id: ""
	I0610 11:52:15.742852   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.742859   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:15.742865   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:15.742910   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:15.773783   57945 cri.go:89] found id: ""
	I0610 11:52:15.773811   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.773818   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:15.773825   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:15.773835   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:15.828725   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:15.828764   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:15.842653   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:15.842682   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:15.919771   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:15.919794   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:15.919809   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:15.994439   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:15.994478   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:13.943213   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:16.439647   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:15.009211   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:18.081244   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:16.191615   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:18.191760   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:18.532040   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:18.544800   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:18.544893   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:18.579148   57945 cri.go:89] found id: ""
	I0610 11:52:18.579172   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.579180   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:18.579186   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:18.579236   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:18.613005   57945 cri.go:89] found id: ""
	I0610 11:52:18.613028   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.613035   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:18.613042   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:18.613094   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:18.648843   57945 cri.go:89] found id: ""
	I0610 11:52:18.648870   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.648878   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:18.648883   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:18.648939   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:18.678943   57945 cri.go:89] found id: ""
	I0610 11:52:18.678974   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.679014   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:18.679022   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:18.679082   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:18.728485   57945 cri.go:89] found id: ""
	I0610 11:52:18.728516   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.728527   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:18.728535   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:18.728605   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:18.764320   57945 cri.go:89] found id: ""
	I0610 11:52:18.764352   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.764363   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:18.764370   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:18.764431   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:18.797326   57945 cri.go:89] found id: ""
	I0610 11:52:18.797358   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.797369   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:18.797377   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:18.797440   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:18.832517   57945 cri.go:89] found id: ""
	I0610 11:52:18.832552   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.832563   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:18.832574   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:18.832588   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:18.845158   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:18.845192   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:18.915928   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:18.915959   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:18.915974   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:18.990583   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:18.990625   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:19.029044   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:19.029069   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:21.582973   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:21.596373   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:21.596453   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:21.633497   57945 cri.go:89] found id: ""
	I0610 11:52:21.633528   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.633538   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:21.633546   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:21.633631   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:21.663999   57945 cri.go:89] found id: ""
	I0610 11:52:21.664055   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.664069   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:21.664078   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:21.664138   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:21.698105   57945 cri.go:89] found id: ""
	I0610 11:52:21.698136   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.698147   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:21.698155   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:21.698213   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:21.730036   57945 cri.go:89] found id: ""
	I0610 11:52:21.730061   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.730068   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:21.730074   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:21.730119   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:21.764484   57945 cri.go:89] found id: ""
	I0610 11:52:21.764507   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.764515   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:21.764520   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:21.764575   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:21.797366   57945 cri.go:89] found id: ""
	I0610 11:52:21.797397   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.797408   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:21.797415   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:21.797478   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:21.832991   57945 cri.go:89] found id: ""
	I0610 11:52:21.833023   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.833030   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:21.833035   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:21.833081   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:21.868859   57945 cri.go:89] found id: ""
	I0610 11:52:21.868890   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.868899   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:21.868924   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:21.868937   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:21.918976   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:21.919013   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:21.934602   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:21.934629   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:22.002888   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:22.002909   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:22.002920   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:22.082894   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:22.082941   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:18.439853   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:20.942040   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:20.692398   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:23.191532   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:24.620683   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:24.634200   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:24.634280   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:24.667181   57945 cri.go:89] found id: ""
	I0610 11:52:24.667209   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.667217   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:24.667222   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:24.667277   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:24.702114   57945 cri.go:89] found id: ""
	I0610 11:52:24.702142   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.702151   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:24.702158   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:24.702220   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:24.734464   57945 cri.go:89] found id: ""
	I0610 11:52:24.734488   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.734497   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:24.734502   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:24.734565   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:24.767074   57945 cri.go:89] found id: ""
	I0610 11:52:24.767124   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.767132   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:24.767138   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:24.767210   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:24.800328   57945 cri.go:89] found id: ""
	I0610 11:52:24.800358   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.800369   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:24.800376   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:24.800442   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:24.837785   57945 cri.go:89] found id: ""
	I0610 11:52:24.837814   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.837822   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:24.837828   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:24.837878   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:24.874886   57945 cri.go:89] found id: ""
	I0610 11:52:24.874910   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.874917   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:24.874923   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:24.874968   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:24.912191   57945 cri.go:89] found id: ""
	I0610 11:52:24.912217   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.912235   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:24.912247   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:24.912265   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:24.968229   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:24.968262   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:24.981018   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:24.981048   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:25.049879   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:25.049907   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:25.049922   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:25.135103   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:25.135156   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:23.440293   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:25.939540   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:27.201186   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:25.691136   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:27.691669   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:27.687667   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:27.700418   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:27.700486   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:27.733712   57945 cri.go:89] found id: ""
	I0610 11:52:27.733740   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.733749   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:27.733754   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:27.733839   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:27.774063   57945 cri.go:89] found id: ""
	I0610 11:52:27.774089   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.774100   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:27.774108   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:27.774169   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:27.813906   57945 cri.go:89] found id: ""
	I0610 11:52:27.813945   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.813956   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:27.813963   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:27.814031   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:27.845877   57945 cri.go:89] found id: ""
	I0610 11:52:27.845901   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.845909   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:27.845915   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:27.845961   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:27.880094   57945 cri.go:89] found id: ""
	I0610 11:52:27.880139   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.880148   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:27.880153   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:27.880206   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:27.914308   57945 cri.go:89] found id: ""
	I0610 11:52:27.914332   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.914342   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:27.914355   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:27.914420   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:27.949386   57945 cri.go:89] found id: ""
	I0610 11:52:27.949412   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.949423   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:27.949430   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:27.949490   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:27.983901   57945 cri.go:89] found id: ""
	I0610 11:52:27.983927   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.983938   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:27.983948   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:27.983963   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:28.032820   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:28.032853   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:28.046306   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:28.046332   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:28.120614   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:28.120642   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:28.120657   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:28.202182   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:28.202217   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:30.741274   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:30.754276   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:30.754358   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:30.789142   57945 cri.go:89] found id: ""
	I0610 11:52:30.789174   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.789185   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:30.789193   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:30.789255   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:30.822319   57945 cri.go:89] found id: ""
	I0610 11:52:30.822350   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.822362   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:30.822369   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:30.822428   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:30.853166   57945 cri.go:89] found id: ""
	I0610 11:52:30.853192   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.853199   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:30.853204   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:30.853271   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:30.892290   57945 cri.go:89] found id: ""
	I0610 11:52:30.892320   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.892331   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:30.892339   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:30.892401   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:30.938603   57945 cri.go:89] found id: ""
	I0610 11:52:30.938629   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.938639   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:30.938646   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:30.938703   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:30.994532   57945 cri.go:89] found id: ""
	I0610 11:52:30.994567   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.994583   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:30.994589   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:30.994649   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:31.041818   57945 cri.go:89] found id: ""
	I0610 11:52:31.041847   57945 logs.go:276] 0 containers: []
	W0610 11:52:31.041859   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:31.041867   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:31.041923   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:31.079897   57945 cri.go:89] found id: ""
	I0610 11:52:31.079927   57945 logs.go:276] 0 containers: []
	W0610 11:52:31.079938   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:31.079951   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:31.079967   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:31.092291   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:31.092321   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:31.163921   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:31.163943   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:31.163955   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:31.242247   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:31.242287   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:31.281257   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:31.281286   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:27.940743   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:30.440529   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:30.273256   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:30.192386   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:32.192470   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:34.691408   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:33.837783   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:33.851085   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:33.851164   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:33.885285   57945 cri.go:89] found id: ""
	I0610 11:52:33.885314   57945 logs.go:276] 0 containers: []
	W0610 11:52:33.885324   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:33.885332   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:33.885391   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:33.924958   57945 cri.go:89] found id: ""
	I0610 11:52:33.924996   57945 logs.go:276] 0 containers: []
	W0610 11:52:33.925006   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:33.925022   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:33.925083   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:33.958563   57945 cri.go:89] found id: ""
	I0610 11:52:33.958589   57945 logs.go:276] 0 containers: []
	W0610 11:52:33.958598   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:33.958606   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:33.958665   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:33.991575   57945 cri.go:89] found id: ""
	I0610 11:52:33.991606   57945 logs.go:276] 0 containers: []
	W0610 11:52:33.991616   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:33.991624   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:33.991693   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:34.029700   57945 cri.go:89] found id: ""
	I0610 11:52:34.029729   57945 logs.go:276] 0 containers: []
	W0610 11:52:34.029740   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:34.029748   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:34.029805   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:34.068148   57945 cri.go:89] found id: ""
	I0610 11:52:34.068183   57945 logs.go:276] 0 containers: []
	W0610 11:52:34.068194   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:34.068201   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:34.068275   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:34.100735   57945 cri.go:89] found id: ""
	I0610 11:52:34.100760   57945 logs.go:276] 0 containers: []
	W0610 11:52:34.100767   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:34.100772   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:34.100817   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:34.132898   57945 cri.go:89] found id: ""
	I0610 11:52:34.132927   57945 logs.go:276] 0 containers: []
	W0610 11:52:34.132937   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:34.132958   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:34.132972   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:34.184690   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:34.184723   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:34.199604   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:34.199641   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:34.270744   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:34.270763   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:34.270775   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:34.352291   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:34.352334   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:36.894188   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:36.914098   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:36.914158   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:36.957378   57945 cri.go:89] found id: ""
	I0610 11:52:36.957408   57945 logs.go:276] 0 containers: []
	W0610 11:52:36.957419   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:36.957427   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:36.957498   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:37.003576   57945 cri.go:89] found id: ""
	I0610 11:52:37.003602   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.003611   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:37.003618   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:37.003677   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:37.040221   57945 cri.go:89] found id: ""
	I0610 11:52:37.040245   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.040253   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:37.040259   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:37.040307   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:37.078151   57945 cri.go:89] found id: ""
	I0610 11:52:37.078185   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.078195   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:37.078202   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:37.078261   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:37.117446   57945 cri.go:89] found id: ""
	I0610 11:52:37.117468   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.117476   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:37.117482   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:37.117548   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:37.155320   57945 cri.go:89] found id: ""
	I0610 11:52:37.155344   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.155356   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:37.155364   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:37.155414   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:37.192194   57945 cri.go:89] found id: ""
	I0610 11:52:37.192221   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.192230   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:37.192238   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:37.192303   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:37.225567   57945 cri.go:89] found id: ""
	I0610 11:52:37.225594   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.225605   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:37.225616   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:37.225632   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:37.240139   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:37.240164   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 11:52:32.940571   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:34.940672   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:37.440898   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:36.353199   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:36.697419   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:39.190952   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	W0610 11:52:37.307754   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:37.307784   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:37.307801   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:37.385929   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:37.385964   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:37.424991   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:37.425029   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:39.974839   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:39.988788   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:39.988858   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:40.025922   57945 cri.go:89] found id: ""
	I0610 11:52:40.025947   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.025954   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:40.025967   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:40.026026   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:40.062043   57945 cri.go:89] found id: ""
	I0610 11:52:40.062076   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.062085   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:40.062094   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:40.062158   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:40.095441   57945 cri.go:89] found id: ""
	I0610 11:52:40.095465   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.095472   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:40.095478   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:40.095529   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:40.127633   57945 cri.go:89] found id: ""
	I0610 11:52:40.127662   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.127672   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:40.127680   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:40.127740   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:40.161232   57945 cri.go:89] found id: ""
	I0610 11:52:40.161257   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.161267   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:40.161274   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:40.161334   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:40.194491   57945 cri.go:89] found id: ""
	I0610 11:52:40.194521   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.194529   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:40.194535   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:40.194583   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:40.226376   57945 cri.go:89] found id: ""
	I0610 11:52:40.226404   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.226411   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:40.226416   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:40.226465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:40.257938   57945 cri.go:89] found id: ""
	I0610 11:52:40.257968   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.257978   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:40.257988   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:40.258004   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:40.327247   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:40.327276   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:40.327291   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:40.404231   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:40.404263   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:40.441554   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:40.441585   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:40.491952   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:40.491987   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:39.939538   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:41.939639   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:39.425159   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:41.191808   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:43.695646   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:43.006217   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:43.019113   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:43.019187   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:43.053010   57945 cri.go:89] found id: ""
	I0610 11:52:43.053035   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.053045   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:43.053051   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:43.053115   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:43.086118   57945 cri.go:89] found id: ""
	I0610 11:52:43.086145   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.086156   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:43.086171   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:43.086235   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:43.117892   57945 cri.go:89] found id: ""
	I0610 11:52:43.117919   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.117929   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:43.117937   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:43.118011   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:43.149751   57945 cri.go:89] found id: ""
	I0610 11:52:43.149777   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.149787   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:43.149795   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:43.149855   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:43.184215   57945 cri.go:89] found id: ""
	I0610 11:52:43.184250   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.184261   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:43.184268   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:43.184332   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:43.219758   57945 cri.go:89] found id: ""
	I0610 11:52:43.219787   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.219797   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:43.219805   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:43.219868   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:43.250698   57945 cri.go:89] found id: ""
	I0610 11:52:43.250728   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.250738   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:43.250746   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:43.250803   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:43.286526   57945 cri.go:89] found id: ""
	I0610 11:52:43.286556   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.286566   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:43.286576   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:43.286589   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:43.362219   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:43.362255   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:43.398332   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:43.398366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:43.449468   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:43.449502   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:43.462346   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:43.462381   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:43.539578   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
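	(For readers reproducing this outside the test harness: the cycle above repeats a simple probe, listing all containers for each control-plane component with crictl and warning when none exists. A minimal sketch of that probe follows; it is an illustration, not minikube's cri.go implementation, and it assumes sudo and crictl are available on the node.)

	// probe_controlplane.go: sketch of the per-component container check the log loops over.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// crictl prints one container ID per line; empty output means no container exists.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("crictl failed for %q: %v\n", name, err)
				continue
			}
			if strings.TrimSpace(string(out)) == "" {
				fmt.Printf("no container was found matching %q\n", name)
			}
		}
	}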
	I0610 11:52:46.039720   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:46.052749   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:46.052821   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:46.093110   57945 cri.go:89] found id: ""
	I0610 11:52:46.093139   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.093147   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:46.093152   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:46.093219   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:46.130885   57945 cri.go:89] found id: ""
	I0610 11:52:46.130916   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.130924   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:46.130930   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:46.130977   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:46.167471   57945 cri.go:89] found id: ""
	I0610 11:52:46.167507   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.167524   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:46.167531   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:46.167593   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:46.204776   57945 cri.go:89] found id: ""
	I0610 11:52:46.204799   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.204807   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:46.204812   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:46.204860   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:46.244826   57945 cri.go:89] found id: ""
	I0610 11:52:46.244859   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.244869   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:46.244876   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:46.244942   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:46.281757   57945 cri.go:89] found id: ""
	I0610 11:52:46.281783   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.281791   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:46.281797   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:46.281844   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:46.319517   57945 cri.go:89] found id: ""
	I0610 11:52:46.319546   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.319558   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:46.319566   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:46.319636   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:46.355806   57945 cri.go:89] found id: ""
	I0610 11:52:46.355835   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.355846   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:46.355858   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:46.355872   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:46.433087   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:46.433131   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:46.468792   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:46.468829   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:46.517931   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:46.517969   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:46.530892   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:46.530935   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:46.592585   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:43.940733   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:46.440354   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:45.505281   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:48.577214   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:46.191520   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:48.691214   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:49.093662   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:49.106539   57945 kubeadm.go:591] duration metric: took 4m4.396325615s to restartPrimaryControlPlane
	W0610 11:52:49.106625   57945 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0610 11:52:49.106658   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0610 11:52:48.441202   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:50.433923   57572 pod_ready.go:81] duration metric: took 4m0.000312516s for pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace to be "Ready" ...
	E0610 11:52:50.433960   57572 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0610 11:52:50.433982   57572 pod_ready.go:38] duration metric: took 4m5.113212783s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 11:52:50.434008   57572 kubeadm.go:591] duration metric: took 4m16.406085019s to restartPrimaryControlPlane
	W0610 11:52:50.434091   57572 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0610 11:52:50.434128   57572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0610 11:52:53.503059   57945 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.396374472s)
	I0610 11:52:53.503148   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:52:53.518235   57945 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 11:52:53.529298   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:52:53.539273   57945 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:52:53.539297   57945 kubeadm.go:156] found existing configuration files:
	
	I0610 11:52:53.539341   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 11:52:53.548285   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:52:53.548354   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:52:53.557659   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 11:52:53.569253   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:52:53.569330   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:52:53.579689   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 11:52:53.589800   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:52:53.589865   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:52:53.600324   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 11:52:53.610542   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:52:53.610612   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
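	(The grep/rm sequence above is the stale-kubeconfig cleanup that precedes the kubeadm init below: each kubeconfig under /etc/kubernetes is kept only if it references the expected control-plane endpoint, otherwise it is removed. A minimal sketch of that check follows; it is illustrative only, assumes it runs as root on the node, and is not minikube's kubeadm.go code.)

	// stale_kubeconfig.go: sketch of the endpoint check and removal shown in the log.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing file or wrong endpoint: treat as stale, mirroring the grep+rm above.
				fmt.Printf("removing stale config %s\n", f)
				os.Remove(f)
			}
		}
	}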
	I0610 11:52:53.620144   57945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 11:52:53.687195   57945 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0610 11:52:53.687302   57945 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 11:52:53.851035   57945 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 11:52:53.851178   57945 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 11:52:53.851305   57945 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 11:52:54.037503   57945 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 11:52:54.039523   57945 out.go:204]   - Generating certificates and keys ...
	I0610 11:52:54.039621   57945 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 11:52:54.039718   57945 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 11:52:54.039850   57945 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 11:52:54.039959   57945 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 11:52:54.040055   57945 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 11:52:54.040135   57945 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 11:52:54.040233   57945 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 11:52:54.040506   57945 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 11:52:54.040892   57945 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 11:52:54.041344   57945 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 11:52:54.041411   57945 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 11:52:54.041507   57945 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 11:52:54.151486   57945 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 11:52:54.389555   57945 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 11:52:54.507653   57945 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 11:52:54.690886   57945 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 11:52:54.708542   57945 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 11:52:54.712251   57945 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 11:52:54.712504   57945 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 11:52:54.872755   57945 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 11:52:50.691517   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:53.191418   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:54.874801   57945 out.go:204]   - Booting up control plane ...
	I0610 11:52:54.874978   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 11:52:54.883224   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 11:52:54.885032   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 11:52:54.886182   57945 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 11:52:54.891030   57945 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 11:52:54.661214   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:57.729160   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:55.691987   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:58.192548   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:00.692060   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:03.192673   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:03.809217   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:06.885176   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:05.692004   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:07.692545   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:12.961318   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:10.191064   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:12.192258   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:14.691564   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:16.033278   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:16.691670   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:18.691801   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:21.778313   57572 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.344150357s)
	I0610 11:53:21.778398   57572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:53:21.793960   57572 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 11:53:21.803952   57572 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:53:21.813685   57572 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:53:21.813709   57572 kubeadm.go:156] found existing configuration files:
	
	I0610 11:53:21.813758   57572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 11:53:21.823957   57572 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:53:21.824027   57572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:53:21.833125   57572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 11:53:21.841834   57572 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:53:21.841893   57572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:53:21.850999   57572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 11:53:21.859858   57572 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:53:21.859920   57572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:53:21.869076   57572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 11:53:21.877079   57572 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:53:21.877141   57572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 11:53:21.887614   57572 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 11:53:21.941932   57572 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0610 11:53:21.941987   57572 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 11:53:22.084118   57572 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 11:53:22.084219   57572 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 11:53:22.084310   57572 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 11:53:22.287685   57572 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 11:53:22.289568   57572 out.go:204]   - Generating certificates and keys ...
	I0610 11:53:22.289674   57572 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 11:53:22.289779   57572 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 11:53:22.289917   57572 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 11:53:22.290032   57572 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 11:53:22.290144   57572 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 11:53:22.290234   57572 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 11:53:22.290339   57572 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 11:53:22.290439   57572 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 11:53:22.290558   57572 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 11:53:22.290674   57572 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 11:53:22.290732   57572 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 11:53:22.290819   57572 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 11:53:22.354674   57572 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 11:53:22.573948   57572 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 11:53:22.805694   57572 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 11:53:22.914740   57572 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 11:53:23.218887   57572 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 11:53:23.221479   57572 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 11:53:23.223937   57572 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 11:53:22.113312   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:20.692241   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:23.192124   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:23.695912   56769 pod_ready.go:81] duration metric: took 4m0.01073501s for pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace to be "Ready" ...
	E0610 11:53:23.695944   56769 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0610 11:53:23.695954   56769 pod_ready.go:38] duration metric: took 4m2.412094982s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 11:53:23.695972   56769 api_server.go:52] waiting for apiserver process to appear ...
	I0610 11:53:23.696001   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:53:23.696058   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:53:23.758822   56769 cri.go:89] found id: "61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:23.758850   56769 cri.go:89] found id: ""
	I0610 11:53:23.758860   56769 logs.go:276] 1 containers: [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29]
	I0610 11:53:23.758921   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.765128   56769 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:53:23.765198   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:53:23.798454   56769 cri.go:89] found id: "0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:23.798483   56769 cri.go:89] found id: ""
	I0610 11:53:23.798494   56769 logs.go:276] 1 containers: [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c]
	I0610 11:53:23.798560   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.802985   56769 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:53:23.803051   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:53:23.855781   56769 cri.go:89] found id: "04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:23.855810   56769 cri.go:89] found id: ""
	I0610 11:53:23.855819   56769 logs.go:276] 1 containers: [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933]
	I0610 11:53:23.855873   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.860285   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:53:23.860363   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:53:23.901849   56769 cri.go:89] found id: "7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:23.901868   56769 cri.go:89] found id: ""
	I0610 11:53:23.901878   56769 logs.go:276] 1 containers: [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9]
	I0610 11:53:23.901935   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.906116   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:53:23.906183   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:53:23.941376   56769 cri.go:89] found id: "3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:23.941396   56769 cri.go:89] found id: ""
	I0610 11:53:23.941405   56769 logs.go:276] 1 containers: [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb]
	I0610 11:53:23.941463   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.947379   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:53:23.947450   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:53:23.984733   56769 cri.go:89] found id: "7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:23.984757   56769 cri.go:89] found id: ""
	I0610 11:53:23.984766   56769 logs.go:276] 1 containers: [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43]
	I0610 11:53:23.984839   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.988701   56769 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:53:23.988752   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:53:24.024067   56769 cri.go:89] found id: ""
	I0610 11:53:24.024094   56769 logs.go:276] 0 containers: []
	W0610 11:53:24.024103   56769 logs.go:278] No container was found matching "kindnet"
	I0610 11:53:24.024110   56769 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0610 11:53:24.024170   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0610 11:53:24.058220   56769 cri.go:89] found id: "5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:24.058250   56769 cri.go:89] found id: "8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:24.058255   56769 cri.go:89] found id: ""
	I0610 11:53:24.058263   56769 logs.go:276] 2 containers: [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262]
	I0610 11:53:24.058321   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:24.062072   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:24.065706   56769 logs.go:123] Gathering logs for etcd [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c] ...
	I0610 11:53:24.065723   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:24.104622   56769 logs.go:123] Gathering logs for coredns [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933] ...
	I0610 11:53:24.104652   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:24.142432   56769 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:53:24.142457   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:53:24.670328   56769 logs.go:123] Gathering logs for container status ...
	I0610 11:53:24.670375   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:53:24.726557   56769 logs.go:123] Gathering logs for kube-scheduler [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9] ...
	I0610 11:53:24.726592   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:24.769111   56769 logs.go:123] Gathering logs for kube-proxy [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb] ...
	I0610 11:53:24.769150   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:24.811199   56769 logs.go:123] Gathering logs for kube-controller-manager [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43] ...
	I0610 11:53:24.811246   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:24.876489   56769 logs.go:123] Gathering logs for storage-provisioner [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e] ...
	I0610 11:53:24.876547   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:23.225694   57572 out.go:204]   - Booting up control plane ...
	I0610 11:53:23.225803   57572 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 11:53:23.225898   57572 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 11:53:23.226004   57572 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 11:53:23.245138   57572 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 11:53:23.246060   57572 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 11:53:23.246121   57572 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 11:53:23.375562   57572 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0610 11:53:23.375689   57572 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 11:53:23.877472   57572 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.888048ms
	I0610 11:53:23.877560   57572 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0610 11:53:25.185274   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:28.879976   57572 kubeadm.go:309] [api-check] The API server is healthy after 5.002334008s
	I0610 11:53:28.902382   57572 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 11:53:28.924552   57572 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 11:53:28.956686   57572 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 11:53:28.956958   57572 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-298179 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 11:53:28.971883   57572 kubeadm.go:309] [bootstrap-token] Using token: zdzp8m.ttyzgfzbws24vbk8
	I0610 11:53:24.916641   56769 logs.go:123] Gathering logs for kubelet ...
	I0610 11:53:24.916824   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:53:24.980737   56769 logs.go:123] Gathering logs for dmesg ...
	I0610 11:53:24.980779   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:53:24.998139   56769 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:53:24.998163   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 11:53:25.113809   56769 logs.go:123] Gathering logs for kube-apiserver [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29] ...
	I0610 11:53:25.113839   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:25.168214   56769 logs.go:123] Gathering logs for storage-provisioner [8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262] ...
	I0610 11:53:25.168254   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:27.708296   56769 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:53:27.730996   56769 api_server.go:72] duration metric: took 4m14.155149231s to wait for apiserver process to appear ...
	I0610 11:53:27.731021   56769 api_server.go:88] waiting for apiserver healthz status ...
	I0610 11:53:27.731057   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:53:27.731116   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:53:27.767385   56769 cri.go:89] found id: "61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:27.767411   56769 cri.go:89] found id: ""
	I0610 11:53:27.767420   56769 logs.go:276] 1 containers: [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29]
	I0610 11:53:27.767465   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:27.771646   56769 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:53:27.771723   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:53:27.806969   56769 cri.go:89] found id: "0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:27.806996   56769 cri.go:89] found id: ""
	I0610 11:53:27.807005   56769 logs.go:276] 1 containers: [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c]
	I0610 11:53:27.807060   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:27.811580   56769 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:53:27.811655   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:53:27.850853   56769 cri.go:89] found id: "04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:27.850879   56769 cri.go:89] found id: ""
	I0610 11:53:27.850888   56769 logs.go:276] 1 containers: [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933]
	I0610 11:53:27.850947   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:27.855284   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:53:27.855347   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:53:27.901228   56769 cri.go:89] found id: "7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:27.901256   56769 cri.go:89] found id: ""
	I0610 11:53:27.901266   56769 logs.go:276] 1 containers: [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9]
	I0610 11:53:27.901322   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:27.905361   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:53:27.905428   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:53:27.943162   56769 cri.go:89] found id: "3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:27.943187   56769 cri.go:89] found id: ""
	I0610 11:53:27.943197   56769 logs.go:276] 1 containers: [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb]
	I0610 11:53:27.943251   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:27.951934   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:53:27.952015   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:53:27.996288   56769 cri.go:89] found id: "7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:27.996316   56769 cri.go:89] found id: ""
	I0610 11:53:27.996325   56769 logs.go:276] 1 containers: [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43]
	I0610 11:53:27.996381   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:28.000307   56769 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:53:28.000378   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:53:28.036978   56769 cri.go:89] found id: ""
	I0610 11:53:28.037016   56769 logs.go:276] 0 containers: []
	W0610 11:53:28.037026   56769 logs.go:278] No container was found matching "kindnet"
	I0610 11:53:28.037033   56769 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0610 11:53:28.037091   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0610 11:53:28.078338   56769 cri.go:89] found id: "5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:28.078363   56769 cri.go:89] found id: "8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:28.078368   56769 cri.go:89] found id: ""
	I0610 11:53:28.078377   56769 logs.go:276] 2 containers: [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262]
	I0610 11:53:28.078433   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:28.082899   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:28.087382   56769 logs.go:123] Gathering logs for storage-provisioner [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e] ...
	I0610 11:53:28.087416   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:28.123014   56769 logs.go:123] Gathering logs for kubelet ...
	I0610 11:53:28.123051   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:53:28.186128   56769 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:53:28.186160   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 11:53:28.314495   56769 logs.go:123] Gathering logs for kube-apiserver [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29] ...
	I0610 11:53:28.314539   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:28.358953   56769 logs.go:123] Gathering logs for coredns [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933] ...
	I0610 11:53:28.358981   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:28.394280   56769 logs.go:123] Gathering logs for kube-controller-manager [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43] ...
	I0610 11:53:28.394306   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:28.450138   56769 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:53:28.450172   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:53:28.851268   56769 logs.go:123] Gathering logs for container status ...
	I0610 11:53:28.851307   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:53:28.909176   56769 logs.go:123] Gathering logs for dmesg ...
	I0610 11:53:28.909202   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:53:28.927322   56769 logs.go:123] Gathering logs for etcd [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c] ...
	I0610 11:53:28.927359   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:28.983941   56769 logs.go:123] Gathering logs for kube-scheduler [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9] ...
	I0610 11:53:28.983971   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:29.023327   56769 logs.go:123] Gathering logs for kube-proxy [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb] ...
	I0610 11:53:29.023352   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:29.063624   56769 logs.go:123] Gathering logs for storage-provisioner [8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262] ...
	I0610 11:53:29.063655   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:28.973316   57572 out.go:204]   - Configuring RBAC rules ...
	I0610 11:53:28.973437   57572 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 11:53:28.979726   57572 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 11:53:28.989075   57572 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 11:53:28.999678   57572 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 11:53:29.005717   57572 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 11:53:29.014439   57572 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 11:53:29.292088   57572 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 11:53:29.734969   57572 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 11:53:30.288723   57572 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 11:53:30.289824   57572 kubeadm.go:309] 
	I0610 11:53:30.289918   57572 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 11:53:30.289930   57572 kubeadm.go:309] 
	I0610 11:53:30.290061   57572 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 11:53:30.290078   57572 kubeadm.go:309] 
	I0610 11:53:30.290107   57572 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 11:53:30.290191   57572 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 11:53:30.290268   57572 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 11:53:30.290316   57572 kubeadm.go:309] 
	I0610 11:53:30.290402   57572 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 11:53:30.290412   57572 kubeadm.go:309] 
	I0610 11:53:30.290481   57572 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 11:53:30.290494   57572 kubeadm.go:309] 
	I0610 11:53:30.290539   57572 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 11:53:30.290602   57572 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 11:53:30.290659   57572 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 11:53:30.290666   57572 kubeadm.go:309] 
	I0610 11:53:30.290749   57572 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 11:53:30.290816   57572 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 11:53:30.290823   57572 kubeadm.go:309] 
	I0610 11:53:30.290901   57572 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token zdzp8m.ttyzgfzbws24vbk8 \
	I0610 11:53:30.291011   57572 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e \
	I0610 11:53:30.291032   57572 kubeadm.go:309] 	--control-plane 
	I0610 11:53:30.291038   57572 kubeadm.go:309] 
	I0610 11:53:30.291113   57572 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 11:53:30.291120   57572 kubeadm.go:309] 
	I0610 11:53:30.291230   57572 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token zdzp8m.ttyzgfzbws24vbk8 \
	I0610 11:53:30.291370   57572 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e 
	I0610 11:53:30.291895   57572 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 11:53:30.291925   57572 cni.go:84] Creating CNI manager for ""
	I0610 11:53:30.291936   57572 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:53:30.294227   57572 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 11:53:30.295470   57572 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 11:53:30.306011   57572 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0610 11:53:30.322832   57572 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 11:53:30.322890   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:30.322960   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-298179 minikube.k8s.io/updated_at=2024_06_10T11_53_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=no-preload-298179 minikube.k8s.io/primary=true
	I0610 11:53:30.486915   57572 ops.go:34] apiserver oom_adj: -16
	I0610 11:53:30.487320   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:30.988103   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:31.488094   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:31.988314   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:32.487603   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:31.265182   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:31.597111   56769 api_server.go:253] Checking apiserver healthz at https://192.168.61.19:8443/healthz ...
	I0610 11:53:31.601589   56769 api_server.go:279] https://192.168.61.19:8443/healthz returned 200:
	ok
	I0610 11:53:31.602609   56769 api_server.go:141] control plane version: v1.30.1
	I0610 11:53:31.602631   56769 api_server.go:131] duration metric: took 3.871604169s to wait for apiserver health ...
	I0610 11:53:31.602639   56769 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 11:53:31.602663   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:53:31.602716   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:53:31.650102   56769 cri.go:89] found id: "61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:31.650130   56769 cri.go:89] found id: ""
	I0610 11:53:31.650139   56769 logs.go:276] 1 containers: [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29]
	I0610 11:53:31.650197   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.654234   56769 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:53:31.654299   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:53:31.690704   56769 cri.go:89] found id: "0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:31.690736   56769 cri.go:89] found id: ""
	I0610 11:53:31.690750   56769 logs.go:276] 1 containers: [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c]
	I0610 11:53:31.690810   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.695139   56769 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:53:31.695209   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:53:31.732593   56769 cri.go:89] found id: "04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:31.732614   56769 cri.go:89] found id: ""
	I0610 11:53:31.732621   56769 logs.go:276] 1 containers: [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933]
	I0610 11:53:31.732667   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.737201   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:53:31.737277   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:53:31.774177   56769 cri.go:89] found id: "7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:31.774219   56769 cri.go:89] found id: ""
	I0610 11:53:31.774239   56769 logs.go:276] 1 containers: [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9]
	I0610 11:53:31.774300   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.778617   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:53:31.778695   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:53:31.816633   56769 cri.go:89] found id: "3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:31.816657   56769 cri.go:89] found id: ""
	I0610 11:53:31.816665   56769 logs.go:276] 1 containers: [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb]
	I0610 11:53:31.816715   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.820846   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:53:31.820928   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:53:31.857021   56769 cri.go:89] found id: "7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:31.857052   56769 cri.go:89] found id: ""
	I0610 11:53:31.857062   56769 logs.go:276] 1 containers: [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43]
	I0610 11:53:31.857127   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.862825   56769 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:53:31.862888   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:53:31.903792   56769 cri.go:89] found id: ""
	I0610 11:53:31.903817   56769 logs.go:276] 0 containers: []
	W0610 11:53:31.903825   56769 logs.go:278] No container was found matching "kindnet"
	I0610 11:53:31.903837   56769 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0610 11:53:31.903885   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0610 11:53:31.942392   56769 cri.go:89] found id: "5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:31.942414   56769 cri.go:89] found id: "8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:31.942419   56769 cri.go:89] found id: ""
	I0610 11:53:31.942428   56769 logs.go:276] 2 containers: [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262]
	I0610 11:53:31.942481   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.949047   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.953590   56769 logs.go:123] Gathering logs for kube-scheduler [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9] ...
	I0610 11:53:31.953625   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:31.991926   56769 logs.go:123] Gathering logs for kube-controller-manager [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43] ...
	I0610 11:53:31.991954   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:32.040857   56769 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:53:32.040894   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:53:32.432680   56769 logs.go:123] Gathering logs for container status ...
	I0610 11:53:32.432731   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:53:32.474819   56769 logs.go:123] Gathering logs for kubelet ...
	I0610 11:53:32.474849   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:53:32.530152   56769 logs.go:123] Gathering logs for dmesg ...
	I0610 11:53:32.530189   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:53:32.547698   56769 logs.go:123] Gathering logs for etcd [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c] ...
	I0610 11:53:32.547735   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:32.598580   56769 logs.go:123] Gathering logs for kube-proxy [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb] ...
	I0610 11:53:32.598634   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:32.643864   56769 logs.go:123] Gathering logs for storage-provisioner [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e] ...
	I0610 11:53:32.643900   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:32.679085   56769 logs.go:123] Gathering logs for storage-provisioner [8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262] ...
	I0610 11:53:32.679118   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:32.714247   56769 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:53:32.714279   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 11:53:32.818508   56769 logs.go:123] Gathering logs for kube-apiserver [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29] ...
	I0610 11:53:32.818551   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:32.862390   56769 logs.go:123] Gathering logs for coredns [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933] ...
	I0610 11:53:32.862424   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:35.408169   56769 system_pods.go:59] 8 kube-system pods found
	I0610 11:53:35.408198   56769 system_pods.go:61] "coredns-7db6d8ff4d-7dlzb" [4b2618cd-b48c-44bd-a07d-4fe4585a14fa] Running
	I0610 11:53:35.408203   56769 system_pods.go:61] "etcd-embed-certs-832735" [4b7d413d-9a2a-4677-b279-5a6d39904679] Running
	I0610 11:53:35.408208   56769 system_pods.go:61] "kube-apiserver-embed-certs-832735" [7e11e03e-7b15-4e9b-8f9a-9a46d7aadd7e] Running
	I0610 11:53:35.408211   56769 system_pods.go:61] "kube-controller-manager-embed-certs-832735" [75aa996d-fdf3-4c32-b25d-03c7582b3502] Running
	I0610 11:53:35.408215   56769 system_pods.go:61] "kube-proxy-b7x2p" [fe1cd055-691f-46b1-ada7-7dded31d2308] Running
	I0610 11:53:35.408218   56769 system_pods.go:61] "kube-scheduler-embed-certs-832735" [b7a7fcfb-7ce9-4470-9052-79bc13029408] Running
	I0610 11:53:35.408223   56769 system_pods.go:61] "metrics-server-569cc877fc-5zg8j" [e979b4b0-356d-479d-990f-d9e6e46a1a9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 11:53:35.408233   56769 system_pods.go:61] "storage-provisioner" [47aa143e-3545-492d-ac93-e62f0076e0f4] Running
	I0610 11:53:35.408241   56769 system_pods.go:74] duration metric: took 3.805596332s to wait for pod list to return data ...
	I0610 11:53:35.408248   56769 default_sa.go:34] waiting for default service account to be created ...
	I0610 11:53:35.410634   56769 default_sa.go:45] found service account: "default"
	I0610 11:53:35.410659   56769 default_sa.go:55] duration metric: took 2.405735ms for default service account to be created ...
	I0610 11:53:35.410667   56769 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 11:53:35.415849   56769 system_pods.go:86] 8 kube-system pods found
	I0610 11:53:35.415871   56769 system_pods.go:89] "coredns-7db6d8ff4d-7dlzb" [4b2618cd-b48c-44bd-a07d-4fe4585a14fa] Running
	I0610 11:53:35.415876   56769 system_pods.go:89] "etcd-embed-certs-832735" [4b7d413d-9a2a-4677-b279-5a6d39904679] Running
	I0610 11:53:35.415881   56769 system_pods.go:89] "kube-apiserver-embed-certs-832735" [7e11e03e-7b15-4e9b-8f9a-9a46d7aadd7e] Running
	I0610 11:53:35.415886   56769 system_pods.go:89] "kube-controller-manager-embed-certs-832735" [75aa996d-fdf3-4c32-b25d-03c7582b3502] Running
	I0610 11:53:35.415890   56769 system_pods.go:89] "kube-proxy-b7x2p" [fe1cd055-691f-46b1-ada7-7dded31d2308] Running
	I0610 11:53:35.415894   56769 system_pods.go:89] "kube-scheduler-embed-certs-832735" [b7a7fcfb-7ce9-4470-9052-79bc13029408] Running
	I0610 11:53:35.415900   56769 system_pods.go:89] "metrics-server-569cc877fc-5zg8j" [e979b4b0-356d-479d-990f-d9e6e46a1a9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 11:53:35.415906   56769 system_pods.go:89] "storage-provisioner" [47aa143e-3545-492d-ac93-e62f0076e0f4] Running
	I0610 11:53:35.415913   56769 system_pods.go:126] duration metric: took 5.241641ms to wait for k8s-apps to be running ...
	I0610 11:53:35.415919   56769 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 11:53:35.415957   56769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:53:35.431179   56769 system_svc.go:56] duration metric: took 15.252123ms WaitForService to wait for kubelet
	I0610 11:53:35.431209   56769 kubeadm.go:576] duration metric: took 4m21.85536785s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 11:53:35.431233   56769 node_conditions.go:102] verifying NodePressure condition ...
	I0610 11:53:35.433918   56769 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 11:53:35.433941   56769 node_conditions.go:123] node cpu capacity is 2
	I0610 11:53:35.433955   56769 node_conditions.go:105] duration metric: took 2.718538ms to run NodePressure ...
	I0610 11:53:35.433966   56769 start.go:240] waiting for startup goroutines ...
	I0610 11:53:35.433973   56769 start.go:245] waiting for cluster config update ...
	I0610 11:53:35.433982   56769 start.go:254] writing updated cluster config ...
	I0610 11:53:35.434234   56769 ssh_runner.go:195] Run: rm -f paused
	I0610 11:53:35.483552   56769 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 11:53:35.485459   56769 out.go:177] * Done! kubectl is now configured to use "embed-certs-832735" cluster and "default" namespace by default
	I0610 11:53:34.892890   57945 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0610 11:53:34.893019   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:53:34.893195   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:53:32.987749   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:33.488008   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:33.988419   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:34.488002   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:34.988349   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:35.487347   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:35.987479   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:36.487972   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:36.987442   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:37.488069   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:34.337236   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:39.893441   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:53:39.893640   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:53:37.987751   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:38.488215   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:38.987955   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:39.487394   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:39.987431   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:40.488304   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:40.987779   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:41.488123   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:41.987438   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:42.487799   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:42.987548   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:43.084050   57572 kubeadm.go:1107] duration metric: took 12.761214532s to wait for elevateKubeSystemPrivileges
	W0610 11:53:43.084095   57572 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 11:53:43.084109   57572 kubeadm.go:393] duration metric: took 5m9.100565129s to StartCluster
	I0610 11:53:43.084128   57572 settings.go:142] acquiring lock: {Name:mk00410f6b6051b7558c7a348cc8c9f1c35c7547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:53:43.084215   57572 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 11:53:43.085889   57572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/kubeconfig: {Name:mk6bc087e599296d9e4a696a021944fac20ee98b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:53:43.086151   57572 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 11:53:43.087762   57572 out.go:177] * Verifying Kubernetes components...
	I0610 11:53:43.086215   57572 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 11:53:43.087796   57572 addons.go:69] Setting storage-provisioner=true in profile "no-preload-298179"
	I0610 11:53:43.087800   57572 addons.go:69] Setting default-storageclass=true in profile "no-preload-298179"
	I0610 11:53:43.087819   57572 addons.go:234] Setting addon storage-provisioner=true in "no-preload-298179"
	W0610 11:53:43.087825   57572 addons.go:243] addon storage-provisioner should already be in state true
	I0610 11:53:43.087832   57572 addons.go:69] Setting metrics-server=true in profile "no-preload-298179"
	I0610 11:53:43.087847   57572 addons.go:234] Setting addon metrics-server=true in "no-preload-298179"
	W0610 11:53:43.087856   57572 addons.go:243] addon metrics-server should already be in state true
	I0610 11:53:43.087826   57572 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-298179"
	I0610 11:53:43.087878   57572 host.go:66] Checking if "no-preload-298179" exists ...
	I0610 11:53:43.089535   57572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:53:43.087856   57572 host.go:66] Checking if "no-preload-298179" exists ...
	I0610 11:53:43.086356   57572 config.go:182] Loaded profile config "no-preload-298179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:53:43.088180   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.088182   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.089687   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.089713   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.089869   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.089895   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.104587   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38283
	I0610 11:53:43.104609   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44535
	I0610 11:53:43.104586   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34031
	I0610 11:53:43.105501   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.105566   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.105508   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.105983   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.105997   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.106134   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.106153   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.106160   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.106184   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.106350   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.106526   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.106568   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.106692   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetState
	I0610 11:53:43.106890   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.106918   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.107118   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.107141   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.109645   57572 addons.go:234] Setting addon default-storageclass=true in "no-preload-298179"
	W0610 11:53:43.109664   57572 addons.go:243] addon default-storageclass should already be in state true
	I0610 11:53:43.109692   57572 host.go:66] Checking if "no-preload-298179" exists ...
	I0610 11:53:43.109914   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.109939   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.123209   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45401
	I0610 11:53:43.123703   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.124011   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38295
	I0610 11:53:43.124351   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.124372   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.124393   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.124777   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.124847   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.124869   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.124998   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetState
	I0610 11:53:43.125208   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.125941   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.125994   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.126208   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35175
	I0610 11:53:43.126555   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.126915   57572 main.go:141] libmachine: (no-preload-298179) Calling .DriverName
	I0610 11:53:43.127030   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.127038   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.129007   57572 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0610 11:53:43.127369   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.130329   57572 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0610 11:53:43.130349   57572 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0610 11:53:43.130372   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHHostname
	I0610 11:53:43.130501   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetState
	I0610 11:53:43.132699   57572 main.go:141] libmachine: (no-preload-298179) Calling .DriverName
	I0610 11:53:43.134359   57572 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 11:53:40.417218   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:43.489341   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:43.135801   57572 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 11:53:43.135818   57572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 11:53:43.135837   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHHostname
	I0610 11:53:43.134045   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.135918   57572 main.go:141] libmachine: (no-preload-298179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:72:68", ip: ""} in network mk-no-preload-298179: {Iface:virbr2 ExpiryTime:2024-06-10 12:48:08 +0000 UTC Type:0 Mac:52:54:00:92:72:68 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:no-preload-298179 Clientid:01:52:54:00:92:72:68}
	I0610 11:53:43.135948   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined IP address 192.168.39.48 and MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.134772   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHPort
	I0610 11:53:43.136159   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHKeyPath
	I0610 11:53:43.136318   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHUsername
	I0610 11:53:43.136621   57572 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/no-preload-298179/id_rsa Username:docker}
	I0610 11:53:43.139217   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.139636   57572 main.go:141] libmachine: (no-preload-298179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:72:68", ip: ""} in network mk-no-preload-298179: {Iface:virbr2 ExpiryTime:2024-06-10 12:48:08 +0000 UTC Type:0 Mac:52:54:00:92:72:68 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:no-preload-298179 Clientid:01:52:54:00:92:72:68}
	I0610 11:53:43.139658   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined IP address 192.168.39.48 and MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.140091   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHPort
	I0610 11:53:43.140568   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHKeyPath
	I0610 11:53:43.140865   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHUsername
	I0610 11:53:43.141293   57572 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/no-preload-298179/id_rsa Username:docker}
	I0610 11:53:43.145179   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40713
	I0610 11:53:43.145813   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.146336   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.146358   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.146675   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.146987   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetState
	I0610 11:53:43.148747   57572 main.go:141] libmachine: (no-preload-298179) Calling .DriverName
	I0610 11:53:43.149026   57572 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 11:53:43.149042   57572 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 11:53:43.149064   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHHostname
	I0610 11:53:43.152048   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.152550   57572 main.go:141] libmachine: (no-preload-298179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:72:68", ip: ""} in network mk-no-preload-298179: {Iface:virbr2 ExpiryTime:2024-06-10 12:48:08 +0000 UTC Type:0 Mac:52:54:00:92:72:68 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:no-preload-298179 Clientid:01:52:54:00:92:72:68}
	I0610 11:53:43.152572   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined IP address 192.168.39.48 and MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.152780   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHPort
	I0610 11:53:43.153021   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHKeyPath
	I0610 11:53:43.153256   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHUsername
	I0610 11:53:43.153406   57572 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/no-preload-298179/id_rsa Username:docker}
	I0610 11:53:43.293079   57572 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 11:53:43.323699   57572 node_ready.go:35] waiting up to 6m0s for node "no-preload-298179" to be "Ready" ...
	I0610 11:53:43.331922   57572 node_ready.go:49] node "no-preload-298179" has status "Ready":"True"
	I0610 11:53:43.331946   57572 node_ready.go:38] duration metric: took 8.212434ms for node "no-preload-298179" to be "Ready" ...
	I0610 11:53:43.331956   57572 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 11:53:43.338721   57572 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9mqrm" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:43.399175   57572 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0610 11:53:43.399196   57572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0610 11:53:43.432920   57572 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0610 11:53:43.432986   57572 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0610 11:53:43.453982   57572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 11:53:43.457146   57572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 11:53:43.500871   57572 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0610 11:53:43.500900   57572 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0610 11:53:43.601303   57572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0610 11:53:44.376916   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.376992   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.377083   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.377105   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.377298   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.377377   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.377383   57572 main.go:141] libmachine: (no-preload-298179) DBG | Closing plugin on server side
	I0610 11:53:44.377301   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.377394   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.377403   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.377405   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.377414   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.377421   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.377608   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.377634   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.379039   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.379090   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.379054   57572 main.go:141] libmachine: (no-preload-298179) DBG | Closing plugin on server side
	I0610 11:53:44.397328   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.397354   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.397626   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.397644   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.880094   57572 pod_ready.go:92] pod "coredns-7db6d8ff4d-9mqrm" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:44.880129   57572 pod_ready.go:81] duration metric: took 1.541384627s for pod "coredns-7db6d8ff4d-9mqrm" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.880149   57572 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f622z" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.901625   57572 pod_ready.go:92] pod "coredns-7db6d8ff4d-f622z" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:44.901649   57572 pod_ready.go:81] duration metric: took 21.492207ms for pod "coredns-7db6d8ff4d-f622z" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.901658   57572 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.907530   57572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.306184796s)
	I0610 11:53:44.907587   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.907603   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.907929   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.907991   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.908005   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.908015   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.908262   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.908301   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.908305   57572 main.go:141] libmachine: (no-preload-298179) DBG | Closing plugin on server side
	I0610 11:53:44.908315   57572 addons.go:475] Verifying addon metrics-server=true in "no-preload-298179"
	I0610 11:53:44.910622   57572 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0610 11:53:44.911848   57572 addons.go:510] duration metric: took 1.825630817s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
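
Editor's note: the addon flow logged above copies each manifest onto the node and then applies it with the bundled kubectl under an explicit KUBECONFIG. A minimal stand-alone sketch of that idea in Go follows; the kubectl binary name, kubeconfig path, and manifest file names are assumptions for illustration, not minikube's internal implementation.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifests shells out to kubectl to apply a list of manifest files,
// mirroring the "kubectl apply -f ..." calls visible in the log above.
func applyManifests(kubectlPath, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectlPath, args...)
	// Point kubectl at the desired cluster, as the logged commands do with KUBECONFIG=...
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical paths; adjust to your environment.
	manifests := []string{"metrics-apiservice.yaml", "metrics-server-deployment.yaml"}
	if err := applyManifests("kubectl", os.Getenv("HOME")+"/.kube/config", manifests); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
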
	I0610 11:53:44.922534   57572 pod_ready.go:92] pod "etcd-no-preload-298179" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:44.922562   57572 pod_ready.go:81] duration metric: took 20.896794ms for pod "etcd-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.922576   57572 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.947545   57572 pod_ready.go:92] pod "kube-apiserver-no-preload-298179" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:44.947569   57572 pod_ready.go:81] duration metric: took 24.984822ms for pod "kube-apiserver-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.947578   57572 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.956216   57572 pod_ready.go:92] pod "kube-controller-manager-no-preload-298179" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:44.956240   57572 pod_ready.go:81] duration metric: took 8.656291ms for pod "kube-controller-manager-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.956256   57572 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fhndh" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:45.326936   57572 pod_ready.go:92] pod "kube-proxy-fhndh" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:45.326977   57572 pod_ready.go:81] duration metric: took 370.713967ms for pod "kube-proxy-fhndh" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:45.326987   57572 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:45.733487   57572 pod_ready.go:92] pod "kube-scheduler-no-preload-298179" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:45.733514   57572 pod_ready.go:81] duration metric: took 406.51925ms for pod "kube-scheduler-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:45.733525   57572 pod_ready.go:38] duration metric: took 2.401559014s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 11:53:45.733544   57572 api_server.go:52] waiting for apiserver process to appear ...
	I0610 11:53:45.733612   57572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:53:45.754814   57572 api_server.go:72] duration metric: took 2.668628419s to wait for apiserver process to appear ...
	I0610 11:53:45.754838   57572 api_server.go:88] waiting for apiserver healthz status ...
	I0610 11:53:45.754867   57572 api_server.go:253] Checking apiserver healthz at https://192.168.39.48:8443/healthz ...
	I0610 11:53:45.763742   57572 api_server.go:279] https://192.168.39.48:8443/healthz returned 200:
	ok
	I0610 11:53:45.765314   57572 api_server.go:141] control plane version: v1.30.1
	I0610 11:53:45.765345   57572 api_server.go:131] duration metric: took 10.498726ms to wait for apiserver health ...
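
Editor's note: the healthz gate above is a plain HTTPS GET against <apiserver>/healthz that expects a 200 response with body "ok". A minimal sketch of that probe is shown below; the endpoint is taken from the log, and skipping TLS verification is an assumption made only to keep the example short — a real check should load the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint from the log above; substitute your own apiserver address.
	url := "https://192.168.39.48:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// Insecure only for this sketch; load the cluster CA in real code.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}
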
	I0610 11:53:45.765356   57572 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 11:53:45.930764   57572 system_pods.go:59] 9 kube-system pods found
	I0610 11:53:45.930792   57572 system_pods.go:61] "coredns-7db6d8ff4d-9mqrm" [6269d670-dffa-4526-8117-0b44df04554a] Running
	I0610 11:53:45.930796   57572 system_pods.go:61] "coredns-7db6d8ff4d-f622z" [16cb4de3-afa9-4e45-bc85-e51273973808] Running
	I0610 11:53:45.930800   57572 system_pods.go:61] "etcd-no-preload-298179" [088f1950-04c4-49e0-b3e2-fe8b5f398a08] Running
	I0610 11:53:45.930806   57572 system_pods.go:61] "kube-apiserver-no-preload-298179" [11bad142-25ff-4aa9-9d9e-4b7cbb053bdd] Running
	I0610 11:53:45.930810   57572 system_pods.go:61] "kube-controller-manager-no-preload-298179" [ac29a4d9-6e9c-44fd-bb39-477255b94d0c] Running
	I0610 11:53:45.930814   57572 system_pods.go:61] "kube-proxy-fhndh" [50f848e7-44f6-4ab1-bf94-3189733abca2] Running
	I0610 11:53:45.930818   57572 system_pods.go:61] "kube-scheduler-no-preload-298179" [8569c375-b9bd-4a46-91ea-c6372056e45d] Running
	I0610 11:53:45.930826   57572 system_pods.go:61] "metrics-server-569cc877fc-jp7dr" [21136ae9-40d8-4857-aca5-47e3fa3b7e9c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 11:53:45.930831   57572 system_pods.go:61] "storage-provisioner" [783f523c-4c21-4ae0-bc18-9c391e7342b0] Running
	I0610 11:53:45.930843   57572 system_pods.go:74] duration metric: took 165.479385ms to wait for pod list to return data ...
	I0610 11:53:45.930855   57572 default_sa.go:34] waiting for default service account to be created ...
	I0610 11:53:46.127109   57572 default_sa.go:45] found service account: "default"
	I0610 11:53:46.127145   57572 default_sa.go:55] duration metric: took 196.279685ms for default service account to be created ...
	I0610 11:53:46.127154   57572 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 11:53:46.330560   57572 system_pods.go:86] 9 kube-system pods found
	I0610 11:53:46.330587   57572 system_pods.go:89] "coredns-7db6d8ff4d-9mqrm" [6269d670-dffa-4526-8117-0b44df04554a] Running
	I0610 11:53:46.330592   57572 system_pods.go:89] "coredns-7db6d8ff4d-f622z" [16cb4de3-afa9-4e45-bc85-e51273973808] Running
	I0610 11:53:46.330597   57572 system_pods.go:89] "etcd-no-preload-298179" [088f1950-04c4-49e0-b3e2-fe8b5f398a08] Running
	I0610 11:53:46.330601   57572 system_pods.go:89] "kube-apiserver-no-preload-298179" [11bad142-25ff-4aa9-9d9e-4b7cbb053bdd] Running
	I0610 11:53:46.330605   57572 system_pods.go:89] "kube-controller-manager-no-preload-298179" [ac29a4d9-6e9c-44fd-bb39-477255b94d0c] Running
	I0610 11:53:46.330608   57572 system_pods.go:89] "kube-proxy-fhndh" [50f848e7-44f6-4ab1-bf94-3189733abca2] Running
	I0610 11:53:46.330612   57572 system_pods.go:89] "kube-scheduler-no-preload-298179" [8569c375-b9bd-4a46-91ea-c6372056e45d] Running
	I0610 11:53:46.330619   57572 system_pods.go:89] "metrics-server-569cc877fc-jp7dr" [21136ae9-40d8-4857-aca5-47e3fa3b7e9c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 11:53:46.330623   57572 system_pods.go:89] "storage-provisioner" [783f523c-4c21-4ae0-bc18-9c391e7342b0] Running
	I0610 11:53:46.330631   57572 system_pods.go:126] duration metric: took 203.472984ms to wait for k8s-apps to be running ...
	I0610 11:53:46.330640   57572 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 11:53:46.330681   57572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:53:46.345084   57572 system_svc.go:56] duration metric: took 14.432966ms WaitForService to wait for kubelet
	I0610 11:53:46.345113   57572 kubeadm.go:576] duration metric: took 3.258932349s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 11:53:46.345131   57572 node_conditions.go:102] verifying NodePressure condition ...
	I0610 11:53:46.528236   57572 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 11:53:46.528269   57572 node_conditions.go:123] node cpu capacity is 2
	I0610 11:53:46.528278   57572 node_conditions.go:105] duration metric: took 183.142711ms to run NodePressure ...
	I0610 11:53:46.528288   57572 start.go:240] waiting for startup goroutines ...
	I0610 11:53:46.528294   57572 start.go:245] waiting for cluster config update ...
	I0610 11:53:46.528303   57572 start.go:254] writing updated cluster config ...
	I0610 11:53:46.528561   57572 ssh_runner.go:195] Run: rm -f paused
	I0610 11:53:46.576348   57572 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 11:53:46.578434   57572 out.go:177] * Done! kubectl is now configured to use "no-preload-298179" cluster and "default" namespace by default
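
Editor's note: the readiness loop that just completed amounts to listing kube-system pods and checking their phase. For readers who want to reproduce that check outside minikube, a short client-go sketch is given below; the kubeconfig path is an assumption (minikube writes its own path, as seen earlier in the log).

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig location; adjust to your cluster.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")

	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// List kube-system pods and print their phase, like the system_pods wait above.
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%-50s %s\n", p.Name, p.Status.Phase)
	}
}
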
	I0610 11:53:49.894176   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:53:49.894368   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:53:49.573292   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:52.641233   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:58.721260   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:01.793270   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:07.873263   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:09.895012   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:54:09.895413   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:54:10.945237   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:17.025183   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:20.097196   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:26.177217   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:29.249267   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:35.329193   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:38.401234   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:44.481254   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:47.553200   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:49.896623   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:54:49.896849   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:54:49.896868   57945 kubeadm.go:309] 
	I0610 11:54:49.896922   57945 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0610 11:54:49.897030   57945 kubeadm.go:309] 		timed out waiting for the condition
	I0610 11:54:49.897053   57945 kubeadm.go:309] 
	I0610 11:54:49.897121   57945 kubeadm.go:309] 	This error is likely caused by:
	I0610 11:54:49.897157   57945 kubeadm.go:309] 		- The kubelet is not running
	I0610 11:54:49.897308   57945 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0610 11:54:49.897322   57945 kubeadm.go:309] 
	I0610 11:54:49.897493   57945 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0610 11:54:49.897553   57945 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0610 11:54:49.897612   57945 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0610 11:54:49.897623   57945 kubeadm.go:309] 
	I0610 11:54:49.897755   57945 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0610 11:54:49.897866   57945 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0610 11:54:49.897876   57945 kubeadm.go:309] 
	I0610 11:54:49.898032   57945 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0610 11:54:49.898139   57945 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0610 11:54:49.898253   57945 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0610 11:54:49.898357   57945 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0610 11:54:49.898365   57945 kubeadm.go:309] 
	I0610 11:54:49.899094   57945 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 11:54:49.899208   57945 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0610 11:54:49.899302   57945 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0610 11:54:49.899441   57945 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
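
Editor's note: kubeadm's kubelet-check quoted above is just an HTTP GET against the kubelet's healthz port (10248 by default, as the 'curl -sSL http://localhost:10248/healthz' message shows). A minimal sketch for polling it yourself while debugging a node like this one follows; it assumes you run it on the node itself, and the attempt count and sleep interval are arbitrary.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Same endpoint kubeadm's kubelet-check curls in the log above.
	const url = "http://localhost:10248/healthz"

	client := &http.Client{Timeout: 2 * time.Second}
	for i := 0; i < 10; i++ {
		resp, err := client.Get(url)
		if err != nil {
			// Typical failure mode seen above: connection refused while kubelet is down.
			fmt.Println("kubelet not healthy yet:", err)
		} else {
			fmt.Println("kubelet healthz status:", resp.Status)
			resp.Body.Close()
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("giving up; check 'systemctl status kubelet' and 'journalctl -xeu kubelet'")
}
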
	
	I0610 11:54:49.899502   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0610 11:54:50.366528   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:54:50.380107   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:54:50.390067   57945 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:54:50.390089   57945 kubeadm.go:156] found existing configuration files:
	
	I0610 11:54:50.390132   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 11:54:50.399159   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:54:50.399222   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:54:50.409346   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 11:54:50.420402   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:54:50.420458   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:54:50.432874   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 11:54:50.444351   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:54:50.444430   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:54:50.458175   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 11:54:50.468538   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:54:50.468611   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
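
Editor's note: the cleanup above checks whether each kubeconfig on the node still references the expected control-plane endpoint and removes the file if it does not (or if it is missing), so kubeadm can regenerate it. A stand-alone sketch of that check is below; the endpoint and file list are taken from the log and should be treated as examples.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}

	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Stale or missing config: drop it so it can be rewritten.
			fmt.Println("removing stale config:", f)
			os.Remove(f)
			continue
		}
		fmt.Println("keeping config:", f)
	}
}
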
	I0610 11:54:50.480033   57945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 11:54:50.543600   57945 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0610 11:54:50.543653   57945 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 11:54:50.682810   57945 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 11:54:50.682970   57945 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 11:54:50.683117   57945 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 11:54:50.877761   57945 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 11:54:50.879686   57945 out.go:204]   - Generating certificates and keys ...
	I0610 11:54:50.879788   57945 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 11:54:50.879881   57945 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 11:54:50.880010   57945 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 11:54:50.880075   57945 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 11:54:50.880145   57945 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 11:54:50.880235   57945 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 11:54:50.880334   57945 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 11:54:50.880543   57945 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 11:54:50.880654   57945 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 11:54:50.880771   57945 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 11:54:50.880835   57945 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 11:54:50.880912   57945 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 11:54:51.326073   57945 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 11:54:51.537409   57945 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 11:54:51.721400   57945 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 11:54:51.884882   57945 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 11:54:51.904377   57945 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 11:54:51.906470   57945 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 11:54:51.906560   57945 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 11:54:52.065800   57945 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 11:54:52.067657   57945 out.go:204]   - Booting up control plane ...
	I0610 11:54:52.067807   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 11:54:52.069012   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 11:54:52.070508   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 11:54:52.071669   57945 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 11:54:52.074772   57945 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 11:54:53.633176   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:56.705245   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:02.785227   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:05.857320   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:11.941172   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:15.009275   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:21.089235   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:24.161264   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:32.077145   57945 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0610 11:55:32.077542   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:55:32.077740   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:55:30.241187   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:33.313200   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:37.078114   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:55:37.078357   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:55:39.393317   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:42.465223   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:47.078706   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:55:47.078906   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:55:48.545281   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:51.617229   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:57.697600   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:00.769294   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:07.079053   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:56:07.079285   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:56:06.849261   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:09.925249   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:16.001299   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:19.077309   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:25.153200   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:28.225172   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:31.226848   60146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 11:56:31.226888   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetMachineName
	I0610 11:56:31.227225   60146 buildroot.go:166] provisioning hostname "default-k8s-diff-port-281114"
	I0610 11:56:31.227250   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetMachineName
	I0610 11:56:31.227458   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:31.229187   60146 machine.go:97] duration metric: took 4m37.416418256s to provisionDockerMachine
	I0610 11:56:31.229224   60146 fix.go:56] duration metric: took 4m37.441343871s for fixHost
	I0610 11:56:31.229230   60146 start.go:83] releasing machines lock for "default-k8s-diff-port-281114", held for 4m37.44136358s
	W0610 11:56:31.229245   60146 start.go:713] error starting host: provision: host is not running
	W0610 11:56:31.229314   60146 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0610 11:56:31.229325   60146 start.go:728] Will try again in 5 seconds ...
	I0610 11:56:36.230954   60146 start.go:360] acquireMachinesLock for default-k8s-diff-port-281114: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 11:56:36.231068   60146 start.go:364] duration metric: took 60.465µs to acquireMachinesLock for "default-k8s-diff-port-281114"
	I0610 11:56:36.231091   60146 start.go:96] Skipping create...Using existing machine configuration
	I0610 11:56:36.231096   60146 fix.go:54] fixHost starting: 
	I0610 11:56:36.231372   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:56:36.231392   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:56:36.247286   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38849
	I0610 11:56:36.247715   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:56:36.248272   60146 main.go:141] libmachine: Using API Version  1
	I0610 11:56:36.248292   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:56:36.248585   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:56:36.248787   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:36.248939   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 11:56:36.250776   60146 fix.go:112] recreateIfNeeded on default-k8s-diff-port-281114: state=Stopped err=<nil>
	I0610 11:56:36.250796   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	W0610 11:56:36.250950   60146 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 11:56:36.252942   60146 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-281114" ...
	I0610 11:56:36.254300   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Start
	I0610 11:56:36.254515   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Ensuring networks are active...
	I0610 11:56:36.255281   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Ensuring network default is active
	I0610 11:56:36.255626   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Ensuring network mk-default-k8s-diff-port-281114 is active
	I0610 11:56:36.256059   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Getting domain xml...
	I0610 11:56:36.256819   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Creating domain...
	I0610 11:56:37.521102   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting to get IP...
	I0610 11:56:37.522061   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:37.522494   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:37.522553   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:37.522473   61276 retry.go:31] will retry after 220.098219ms: waiting for machine to come up
	I0610 11:56:37.743932   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:37.744482   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:37.744513   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:37.744440   61276 retry.go:31] will retry after 292.471184ms: waiting for machine to come up
	I0610 11:56:38.038937   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:38.039497   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:38.039526   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:38.039454   61276 retry.go:31] will retry after 446.869846ms: waiting for machine to come up
	I0610 11:56:38.488091   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:38.488684   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:38.488708   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:38.488635   61276 retry.go:31] will retry after 607.787706ms: waiting for machine to come up
	I0610 11:56:39.098375   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:39.098845   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:39.098875   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:39.098795   61276 retry.go:31] will retry after 610.636143ms: waiting for machine to come up
	I0610 11:56:39.710692   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:39.711170   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:39.711198   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:39.711106   61276 retry.go:31] will retry after 598.132053ms: waiting for machine to come up
	I0610 11:56:40.310889   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:40.311397   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:40.311420   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:40.311328   61276 retry.go:31] will retry after 1.191704846s: waiting for machine to come up
	I0610 11:56:41.505131   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:41.505601   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:41.505631   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:41.505572   61276 retry.go:31] will retry after 937.081207ms: waiting for machine to come up
	I0610 11:56:42.444793   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:42.445368   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:42.445396   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:42.445338   61276 retry.go:31] will retry after 1.721662133s: waiting for machine to come up
	I0610 11:56:47.078993   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:56:47.079439   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:56:47.079463   57945 kubeadm.go:309] 
	I0610 11:56:47.079513   57945 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0610 11:56:47.079597   57945 kubeadm.go:309] 		timed out waiting for the condition
	I0610 11:56:47.079629   57945 kubeadm.go:309] 
	I0610 11:56:47.079678   57945 kubeadm.go:309] 	This error is likely caused by:
	I0610 11:56:47.079718   57945 kubeadm.go:309] 		- The kubelet is not running
	I0610 11:56:47.079865   57945 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0610 11:56:47.079876   57945 kubeadm.go:309] 
	I0610 11:56:47.080014   57945 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0610 11:56:47.080077   57945 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0610 11:56:47.080132   57945 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0610 11:56:47.080151   57945 kubeadm.go:309] 
	I0610 11:56:47.080280   57945 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0610 11:56:47.080377   57945 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0610 11:56:47.080389   57945 kubeadm.go:309] 
	I0610 11:56:47.080543   57945 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0610 11:56:47.080663   57945 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0610 11:56:47.080769   57945 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0610 11:56:47.080862   57945 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0610 11:56:47.080874   57945 kubeadm.go:309] 
	I0610 11:56:47.081877   57945 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 11:56:47.082023   57945 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0610 11:56:47.082137   57945 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0610 11:56:47.082233   57945 kubeadm.go:393] duration metric: took 8m2.423366884s to StartCluster
	I0610 11:56:47.082273   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:56:47.082325   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:56:47.130548   57945 cri.go:89] found id: ""
	I0610 11:56:47.130585   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.130596   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:56:47.130603   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:56:47.130673   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:56:47.170087   57945 cri.go:89] found id: ""
	I0610 11:56:47.170124   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.170136   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:56:47.170144   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:56:47.170219   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:56:47.210394   57945 cri.go:89] found id: ""
	I0610 11:56:47.210430   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.210442   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:56:47.210450   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:56:47.210532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:56:47.246002   57945 cri.go:89] found id: ""
	I0610 11:56:47.246032   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.246043   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:56:47.246051   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:56:47.246119   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:56:47.282333   57945 cri.go:89] found id: ""
	I0610 11:56:47.282361   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.282369   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:56:47.282375   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:56:47.282432   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:56:47.316205   57945 cri.go:89] found id: ""
	I0610 11:56:47.316241   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.316254   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:56:47.316262   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:56:47.316323   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:56:47.356012   57945 cri.go:89] found id: ""
	I0610 11:56:47.356047   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.356060   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:56:47.356069   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:56:47.356140   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:56:47.404624   57945 cri.go:89] found id: ""
	I0610 11:56:47.404655   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.404666   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:56:47.404678   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:56:47.404694   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:56:47.475236   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:56:47.475282   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:56:47.493382   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:56:47.493418   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:56:47.589894   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:56:47.589918   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:56:47.589934   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:56:47.726080   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:56:47.726123   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0610 11:56:47.770399   57945 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0610 11:56:47.770451   57945 out.go:239] * 
	W0610 11:56:47.770532   57945 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0610 11:56:47.770558   57945 out.go:239] * 
	W0610 11:56:47.771459   57945 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 11:56:47.775172   57945 out.go:177] 
	W0610 11:56:47.776444   57945 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0610 11:56:47.776509   57945 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0610 11:56:47.776545   57945 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0610 11:56:47.778306   57945 out.go:177] 
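The kubeadm run above (Kubernetes v1.20.0 on CRI-O) fails because the kubelet never answers its health check on 127.0.0.1:10248, so the wait-control-plane phase times out after roughly 8 minutes. A minimal follow-up sketch, using only the commands the log itself suggests; the profile name is a placeholder, since this excerpt does not show which profile process 57945 belongs to:

	# Inspect the kubelet on the node (commands suggested in the kubeadm output above).
	minikube -p <profile> ssh -- sudo systemctl status kubelet
	minikube -p <profile> ssh -- sudo journalctl -xeu kubelet | tail -n 100
	# List any Kubernetes containers CRI-O did manage to start.
	minikube -p <profile> ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Retry with the kubelet cgroup driver pinned to systemd, per the suggestion logged above.
	minikube start -p <profile> --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd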
	I0610 11:56:44.168288   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:44.168809   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:44.168832   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:44.168762   61276 retry.go:31] will retry after 2.181806835s: waiting for machine to come up
	I0610 11:56:46.352210   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:46.352736   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:46.352764   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:46.352688   61276 retry.go:31] will retry after 2.388171324s: waiting for machine to come up
	I0610 11:56:48.744345   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:48.744853   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:48.744890   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:48.744815   61276 retry.go:31] will retry after 2.54250043s: waiting for machine to come up
	I0610 11:56:51.288816   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:51.289222   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:51.289252   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:51.289190   61276 retry.go:31] will retry after 4.525493142s: waiting for machine to come up
	I0610 11:56:55.819862   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.820393   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Found IP for machine: 192.168.50.222
	I0610 11:56:55.820416   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Reserving static IP address...
	I0610 11:56:55.820433   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has current primary IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.820941   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-281114", mac: "52:54:00:23:06:35", ip: "192.168.50.222"} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:55.820984   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Reserved static IP address: 192.168.50.222
	I0610 11:56:55.821000   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | skip adding static IP to network mk-default-k8s-diff-port-281114 - found existing host DHCP lease matching {name: "default-k8s-diff-port-281114", mac: "52:54:00:23:06:35", ip: "192.168.50.222"}
	I0610 11:56:55.821012   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Getting to WaitForSSH function...
	I0610 11:56:55.821028   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for SSH to be available...
	I0610 11:56:55.823149   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.823499   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:55.823530   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.823680   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Using SSH client type: external
	I0610 11:56:55.823717   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Using SSH private key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa (-rw-------)
	I0610 11:56:55.823750   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0610 11:56:55.823764   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | About to run SSH command:
	I0610 11:56:55.823778   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | exit 0
	I0610 11:56:55.949264   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | SSH cmd err, output: <nil>: 
	I0610 11:56:55.949623   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetConfigRaw
	I0610 11:56:55.950371   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetIP
	I0610 11:56:55.953239   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.953602   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:55.953746   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.953874   60146 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/config.json ...
	I0610 11:56:55.954172   60146 machine.go:94] provisionDockerMachine start ...
	I0610 11:56:55.954203   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:55.954415   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:55.956837   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.957344   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:55.957361   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.957521   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:55.957710   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:55.957887   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:55.958055   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:55.958211   60146 main.go:141] libmachine: Using SSH client type: native
	I0610 11:56:55.958425   60146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.222 22 <nil> <nil>}
	I0610 11:56:55.958445   60146 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 11:56:56.061295   60146 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 11:56:56.061331   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetMachineName
	I0610 11:56:56.061559   60146 buildroot.go:166] provisioning hostname "default-k8s-diff-port-281114"
	I0610 11:56:56.061588   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetMachineName
	I0610 11:56:56.061787   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.064578   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.064938   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.064975   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.065131   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:56.065383   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.065565   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.065681   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:56.065874   60146 main.go:141] libmachine: Using SSH client type: native
	I0610 11:56:56.066079   60146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.222 22 <nil> <nil>}
	I0610 11:56:56.066094   60146 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-281114 && echo "default-k8s-diff-port-281114" | sudo tee /etc/hostname
	I0610 11:56:56.183602   60146 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-281114
	
	I0610 11:56:56.183626   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.186613   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.186986   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.187016   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.187260   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:56.187472   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.187656   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.187812   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:56.187993   60146 main.go:141] libmachine: Using SSH client type: native
	I0610 11:56:56.188192   60146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.222 22 <nil> <nil>}
	I0610 11:56:56.188220   60146 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-281114' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-281114/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-281114' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 11:56:56.298027   60146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 11:56:56.298057   60146 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19046-3880/.minikube CaCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19046-3880/.minikube}
	I0610 11:56:56.298076   60146 buildroot.go:174] setting up certificates
	I0610 11:56:56.298083   60146 provision.go:84] configureAuth start
	I0610 11:56:56.298094   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetMachineName
	I0610 11:56:56.298385   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetIP
	I0610 11:56:56.301219   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.301584   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.301614   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.301816   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.304010   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.304412   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.304438   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.304593   60146 provision.go:143] copyHostCerts
	I0610 11:56:56.304668   60146 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem, removing ...
	I0610 11:56:56.304681   60146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 11:56:56.304765   60146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem (1082 bytes)
	I0610 11:56:56.304874   60146 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem, removing ...
	I0610 11:56:56.304884   60146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 11:56:56.304924   60146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem (1123 bytes)
	I0610 11:56:56.305040   60146 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem, removing ...
	I0610 11:56:56.305050   60146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 11:56:56.305084   60146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem (1675 bytes)
	I0610 11:56:56.305153   60146 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-281114 san=[127.0.0.1 192.168.50.222 default-k8s-diff-port-281114 localhost minikube]
	I0610 11:56:56.411016   60146 provision.go:177] copyRemoteCerts
	I0610 11:56:56.411072   60146 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 11:56:56.411093   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.413736   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.414075   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.414122   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.414292   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:56.414498   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.414686   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:56.414785   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 11:56:56.495039   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 11:56:56.519750   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 11:56:56.543202   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0610 11:56:56.566420   60146 provision.go:87] duration metric: took 268.326859ms to configureAuth
	I0610 11:56:56.566449   60146 buildroot.go:189] setting minikube options for container-runtime
	I0610 11:56:56.566653   60146 config.go:182] Loaded profile config "default-k8s-diff-port-281114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:56:56.566732   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.569742   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.570135   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.570159   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.570411   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:56.570635   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.570815   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.570969   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:56.571169   60146 main.go:141] libmachine: Using SSH client type: native
	I0610 11:56:56.571334   60146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.222 22 <nil> <nil>}
	I0610 11:56:56.571350   60146 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 11:56:56.846705   60146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
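At this point minikube has written /etc/sysconfig/crio.minikube so that CRI-O accepts the in-cluster service CIDR 10.96.0.0/12 as an insecure registry range, and has restarted crio over SSH. A quick verification sketch for that file on the node, using the profile name shown in the log above:

	minikube -p default-k8s-diff-port-281114 ssh -- cat /etc/sysconfig/crio.minikube
	# Expected contents, matching the SSH output captured above:
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '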
	I0610 11:56:56.846727   60146 machine.go:97] duration metric: took 892.536744ms to provisionDockerMachine
	I0610 11:56:56.846741   60146 start.go:293] postStartSetup for "default-k8s-diff-port-281114" (driver="kvm2")
	I0610 11:56:56.846753   60146 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 11:56:56.846795   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:56.847123   60146 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 11:56:56.847150   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.849968   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.850300   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.850322   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.850518   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:56.850706   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.850889   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:56.851010   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 11:56:56.935027   60146 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 11:56:56.939465   60146 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 11:56:56.939489   60146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/addons for local assets ...
	I0610 11:56:56.939558   60146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/files for local assets ...
	I0610 11:56:56.939641   60146 filesync.go:149] local asset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> 107582.pem in /etc/ssl/certs
	I0610 11:56:56.939728   60146 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 11:56:56.948993   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /etc/ssl/certs/107582.pem (1708 bytes)
	I0610 11:56:56.974611   60146 start.go:296] duration metric: took 127.85527ms for postStartSetup
	I0610 11:56:56.974655   60146 fix.go:56] duration metric: took 20.74355824s for fixHost
	I0610 11:56:56.974673   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.978036   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.978438   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.978471   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.978612   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:56.978804   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.978984   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.979157   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:56.979343   60146 main.go:141] libmachine: Using SSH client type: native
	I0610 11:56:56.979506   60146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.222 22 <nil> <nil>}
	I0610 11:56:56.979524   60146 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 11:56:57.081416   60146 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718020617.058533839
	
	I0610 11:56:57.081444   60146 fix.go:216] guest clock: 1718020617.058533839
	I0610 11:56:57.081454   60146 fix.go:229] Guest: 2024-06-10 11:56:57.058533839 +0000 UTC Remote: 2024-06-10 11:56:56.974658577 +0000 UTC m=+303.333936196 (delta=83.875262ms)
	I0610 11:56:57.081476   60146 fix.go:200] guest clock delta is within tolerance: 83.875262ms
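	The clock check above runs date +%s.%N on the guest and compares the result with the host clock, accepting the ~84ms delta as within tolerance. Below is a minimal Go sketch of that comparison, reusing the timestamp from the log; the tolerance value and function names are hypothetical, not minikube's implementation.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseUnixSecondsNanos parses `date +%s.%N` output such as "1718020617.058533839".
	// %N always prints nine digits, so the fractional part can be read directly as nanoseconds.
	func parseUnixSecondsNanos(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseUnixSecondsNanos("1718020617.058533839") // value taken from the log above
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // hypothetical tolerance, not minikube's actual threshold
		fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
	}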
	I0610 11:56:57.081482   60146 start.go:83] releasing machines lock for "default-k8s-diff-port-281114", held for 20.850403793s
	I0610 11:56:57.081499   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:57.081775   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetIP
	I0610 11:56:57.084904   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:57.085408   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:57.085442   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:57.085619   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:57.086222   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:57.086432   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:57.086519   60146 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 11:56:57.086571   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:57.086660   60146 ssh_runner.go:195] Run: cat /version.json
	I0610 11:56:57.086694   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:57.089544   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:57.089869   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:57.089904   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:57.089931   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:57.090091   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:57.090259   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:57.090362   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:57.090388   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:57.090444   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:57.090539   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:57.090613   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 11:56:57.090667   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:57.090806   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:57.090969   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 11:56:57.215361   60146 ssh_runner.go:195] Run: systemctl --version
	I0610 11:56:57.221479   60146 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 11:56:57.363318   60146 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 11:56:57.369389   60146 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 11:56:57.369465   60146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 11:56:57.385195   60146 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 11:56:57.385217   60146 start.go:494] detecting cgroup driver to use...
	I0610 11:56:57.385284   60146 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 11:56:57.404923   60146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 11:56:57.420158   60146 docker.go:217] disabling cri-docker service (if available) ...
	I0610 11:56:57.420204   60146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 11:56:57.434385   60146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 11:56:57.448340   60146 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 11:56:57.574978   60146 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 11:56:57.714523   60146 docker.go:233] disabling docker service ...
	I0610 11:56:57.714620   60146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 11:56:57.729914   60146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 11:56:57.742557   60146 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 11:56:57.885770   60146 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 11:56:58.018120   60146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 11:56:58.031606   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 11:56:58.049312   60146 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0610 11:56:58.049389   60146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:56:58.059800   60146 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 11:56:58.059877   60146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:56:58.071774   60146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:56:58.082332   60146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:56:58.093474   60146 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 11:56:58.104231   60146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:56:58.114328   60146 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:56:58.131812   60146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
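	Taken together, the commands above first point crictl at the cri-o socket (via /etc/crictl.yaml) and then rewrite the cri-o drop-in at /etc/crio/crio.conf.d/02-crio.conf in place. As an illustrative sketch only (the exact section layout depends on the drop-in shipped in the minikube ISO), the edited file ends up with settings roughly like:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]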
	I0610 11:56:58.142612   60146 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 11:56:58.152681   60146 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0610 11:56:58.152750   60146 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0610 11:56:58.166120   60146 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 11:56:58.176281   60146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:56:58.306558   60146 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0610 11:56:58.446379   60146 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 11:56:58.446460   60146 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 11:56:58.452523   60146 start.go:562] Will wait 60s for crictl version
	I0610 11:56:58.452619   60146 ssh_runner.go:195] Run: which crictl
	I0610 11:56:58.456611   60146 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 11:56:58.503496   60146 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
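	The two 60-second waits above ("Will wait 60s for socket path", "Will wait 60s for crictl version") are simple poll loops: stat the socket path until it appears, then retry the version query. A minimal Go sketch of such a wait, with hypothetical names and a hypothetical poll interval:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForPath polls until path exists or the timeout elapses.
	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("crio socket is ready")
	}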
	I0610 11:56:58.503581   60146 ssh_runner.go:195] Run: crio --version
	I0610 11:56:58.532834   60146 ssh_runner.go:195] Run: crio --version
	I0610 11:56:58.562697   60146 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0610 11:56:58.563974   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetIP
	I0610 11:56:58.566760   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:58.567107   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:58.567142   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:58.567408   60146 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0610 11:56:58.571671   60146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 11:56:58.584423   60146 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-281114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.1 ClusterName:default-k8s-diff-port-281114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.222 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 11:56:58.584535   60146 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 11:56:58.584588   60146 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 11:56:58.622788   60146 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0610 11:56:58.622862   60146 ssh_runner.go:195] Run: which lz4
	I0610 11:56:58.627561   60146 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0610 11:56:58.632560   60146 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 11:56:58.632595   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0610 11:56:59.943375   60146 crio.go:462] duration metric: took 1.315853744s to copy over tarball
	I0610 11:56:59.943444   60146 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 11:57:02.167265   60146 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.223791523s)
	I0610 11:57:02.167299   60146 crio.go:469] duration metric: took 2.223894548s to extract the tarball
	I0610 11:57:02.167308   60146 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 11:57:02.206288   60146 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 11:57:02.250013   60146 crio.go:514] all images are preloaded for cri-o runtime.
	I0610 11:57:02.250034   60146 cache_images.go:84] Images are preloaded, skipping loading
	I0610 11:57:02.250041   60146 kubeadm.go:928] updating node { 192.168.50.222 8444 v1.30.1 crio true true} ...
	I0610 11:57:02.250163   60146 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-281114 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-281114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 11:57:02.250261   60146 ssh_runner.go:195] Run: crio config
	I0610 11:57:02.305797   60146 cni.go:84] Creating CNI manager for ""
	I0610 11:57:02.305822   60146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:57:02.305838   60146 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 11:57:02.305867   60146 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.222 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-281114 NodeName:default-k8s-diff-port-281114 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 11:57:02.306030   60146 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.222
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-281114"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.222
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 11:57:02.306111   60146 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 11:57:02.316522   60146 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 11:57:02.316585   60146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 11:57:02.326138   60146 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0610 11:57:02.342685   60146 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 11:57:02.359693   60146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0610 11:57:02.375771   60146 ssh_runner.go:195] Run: grep 192.168.50.222	control-plane.minikube.internal$ /etc/hosts
	I0610 11:57:02.379280   60146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 11:57:02.390797   60146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:57:02.511286   60146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 11:57:02.529051   60146 certs.go:68] Setting up /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114 for IP: 192.168.50.222
	I0610 11:57:02.529076   60146 certs.go:194] generating shared ca certs ...
	I0610 11:57:02.529095   60146 certs.go:226] acquiring lock for ca certs: {Name:mke8b68fecbd8b649419d142cdde25446085a9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:57:02.529281   60146 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key
	I0610 11:57:02.529358   60146 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key
	I0610 11:57:02.529373   60146 certs.go:256] generating profile certs ...
	I0610 11:57:02.529492   60146 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/client.key
	I0610 11:57:02.529576   60146 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/apiserver.key.d35a2a33
	I0610 11:57:02.529626   60146 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/proxy-client.key
	I0610 11:57:02.529769   60146 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem (1338 bytes)
	W0610 11:57:02.529810   60146 certs.go:480] ignoring /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758_empty.pem, impossibly tiny 0 bytes
	I0610 11:57:02.529823   60146 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 11:57:02.529857   60146 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem (1082 bytes)
	I0610 11:57:02.529893   60146 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem (1123 bytes)
	I0610 11:57:02.529924   60146 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem (1675 bytes)
	I0610 11:57:02.529981   60146 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem (1708 bytes)
	I0610 11:57:02.531166   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 11:57:02.570183   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 11:57:02.607339   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 11:57:02.653464   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 11:57:02.694329   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0610 11:57:02.722420   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 11:57:02.747321   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 11:57:02.772755   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 11:57:02.797241   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 11:57:02.821892   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem --> /usr/share/ca-certificates/10758.pem (1338 bytes)
	I0610 11:57:02.846925   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /usr/share/ca-certificates/107582.pem (1708 bytes)
	I0610 11:57:02.870986   60146 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 11:57:02.889088   60146 ssh_runner.go:195] Run: openssl version
	I0610 11:57:02.894820   60146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10758.pem && ln -fs /usr/share/ca-certificates/10758.pem /etc/ssl/certs/10758.pem"
	I0610 11:57:02.906689   60146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10758.pem
	I0610 11:57:02.911048   60146 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:34 /usr/share/ca-certificates/10758.pem
	I0610 11:57:02.911095   60146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10758.pem
	I0610 11:57:02.916866   60146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10758.pem /etc/ssl/certs/51391683.0"
	I0610 11:57:02.928405   60146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107582.pem && ln -fs /usr/share/ca-certificates/107582.pem /etc/ssl/certs/107582.pem"
	I0610 11:57:02.941254   60146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107582.pem
	I0610 11:57:02.945849   60146 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:34 /usr/share/ca-certificates/107582.pem
	I0610 11:57:02.945899   60146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107582.pem
	I0610 11:57:02.951833   60146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107582.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 11:57:02.963661   60146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 11:57:02.975117   60146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:57:02.979667   60146 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:57:02.979731   60146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:57:02.985212   60146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 11:57:02.997007   60146 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 11:57:03.001498   60146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0610 11:57:03.007549   60146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0610 11:57:03.013717   60146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0610 11:57:03.019947   60146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0610 11:57:03.025890   60146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0610 11:57:03.031443   60146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
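	The openssl x509 -checkend 86400 calls above confirm that each control-plane certificate remains valid for at least another 24 hours. An equivalent check sketched in Go, for illustration only (not minikube's implementation):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}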
	I0610 11:57:03.036936   60146 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-281114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.1 ClusterName:default-k8s-diff-port-281114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.222 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:57:03.037056   60146 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0610 11:57:03.037111   60146 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 11:57:03.088497   60146 cri.go:89] found id: ""
	I0610 11:57:03.088555   60146 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0610 11:57:03.099358   60146 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0610 11:57:03.099381   60146 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0610 11:57:03.099386   60146 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0610 11:57:03.099436   60146 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0610 11:57:03.109092   60146 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0610 11:57:03.110113   60146 kubeconfig.go:125] found "default-k8s-diff-port-281114" server: "https://192.168.50.222:8444"
	I0610 11:57:03.112565   60146 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0610 11:57:03.122338   60146 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.222
	I0610 11:57:03.122370   60146 kubeadm.go:1154] stopping kube-system containers ...
	I0610 11:57:03.122392   60146 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0610 11:57:03.122453   60146 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 11:57:03.159369   60146 cri.go:89] found id: ""
	I0610 11:57:03.159470   60146 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0610 11:57:03.176704   60146 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:57:03.186957   60146 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:57:03.186977   60146 kubeadm.go:156] found existing configuration files:
	
	I0610 11:57:03.187040   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0610 11:57:03.196318   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:57:03.196397   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:57:03.205630   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0610 11:57:03.214480   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:57:03.214538   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:57:03.223939   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0610 11:57:03.232372   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:57:03.232422   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:57:03.241846   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0610 11:57:03.251014   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:57:03.251092   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 11:57:03.260115   60146 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 11:57:03.269792   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:57:03.388582   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:57:04.274314   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:57:04.473968   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:57:04.531884   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:57:04.618371   60146 api_server.go:52] waiting for apiserver process to appear ...
	I0610 11:57:04.618464   60146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:57:05.118733   60146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:57:05.619107   60146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:57:06.118937   60146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:57:06.138176   60146 api_server.go:72] duration metric: took 1.519803379s to wait for apiserver process to appear ...
	I0610 11:57:06.138205   60146 api_server.go:88] waiting for apiserver healthz status ...
	I0610 11:57:06.138223   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:09.201655   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0610 11:57:09.201680   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0610 11:57:09.201691   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:09.305898   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:09.305934   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
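	The 403 and 500 responses above are expected while the freshly restarted apiserver finishes its post-start hooks; the caller simply keeps polling /healthz until it returns 200 or a deadline passes. A minimal Go sketch of such a poll loop follows; the timeout, interval, and TLS handling are assumptions for illustration, not minikube's actual settings.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Verification is skipped only because this sketch carries no CA bundle;
			// a real client should trust the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %v", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.222:8444/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver is healthy")
	}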
	I0610 11:57:09.639319   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:09.644006   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:09.644041   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:10.138712   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:10.144989   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:10.145024   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:10.638505   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:10.642825   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:10.642861   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:11.138360   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:11.143062   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:11.143087   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:11.639058   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:11.643394   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:11.643419   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:12.139125   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:12.143425   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:12.143452   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:12.639074   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:12.644121   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 200:
	ok
	I0610 11:57:12.650538   60146 api_server.go:141] control plane version: v1.30.1
	I0610 11:57:12.650570   60146 api_server.go:131] duration metric: took 6.512357672s to wait for apiserver health ...
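
	The repeated 500 responses above are minikube polling the apiserver's /healthz endpoint until every post-start hook (here, apiservice-discovery-controller) reports ok and the endpoint returns 200. A minimal Go sketch of that kind of polling loop follows; it is illustrative only rather than minikube's api_server.go, it assumes anonymous access to /healthz is allowed, and it skips TLS verification purely for brevity.

	// healthzpoll.go: poll an apiserver /healthz endpoint until it returns 200
	// or a deadline expires. Sketch only; not minikube's implementation.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// TLS verification skipped for brevity in this sketch; real callers
			// should trust the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.50.222:8444/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver health")
	}

	Once the endpoint returns 200, as it eventually does above, the loop exits and the control-plane version check proceeds.
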
	I0610 11:57:12.650581   60146 cni.go:84] Creating CNI manager for ""
	I0610 11:57:12.650590   60146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:57:12.652548   60146 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 11:57:12.653918   60146 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 11:57:12.664536   60146 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
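
	The "scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)" step above writes a bridge CNI configuration onto the node over SSH. The Go sketch below shows the general shape of that write when run directly on the node; the embedded JSON is an illustrative bridge-plus-portmap conflist, not the exact 496-byte file minikube generates, and the pod subnet is an assumption of this sketch.

	// writeconflist.go: write an illustrative bridge CNI conflist to the node.
	// Must be run as root on the node; the JSON content is an assumption.
	package main

	import "os"

	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			panic(err)
		}
	}

	With the kvm2 driver and the crio runtime and no explicit CNI selection, minikube recommends the bridge CNI, as the "recommending bridge" line above shows.
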
	I0610 11:57:12.685230   60146 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 11:57:12.694511   60146 system_pods.go:59] 8 kube-system pods found
	I0610 11:57:12.694546   60146 system_pods.go:61] "coredns-7db6d8ff4d-5ngxc" [26f3438c-a6a2-43d5-b79d-991752b4cc10] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0610 11:57:12.694561   60146 system_pods.go:61] "etcd-default-k8s-diff-port-281114" [e8a3dc04-a9f0-4670-8256-7a0a617958ba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0610 11:57:12.694610   60146 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-281114" [45080cf7-94ee-4c55-a3b4-cfa8d3b4edbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0610 11:57:12.694626   60146 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-281114" [3f51cb0c-bb90-4847-acd4-0ed8a58608ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0610 11:57:12.694633   60146 system_pods.go:61] "kube-proxy-896ts" [13b994b7-8d0e-4e3d-9902-3bdd7a9ab949] Running
	I0610 11:57:12.694648   60146 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-281114" [c205a8b5-e970-40ed-83d7-462781bcf41f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0610 11:57:12.694659   60146 system_pods.go:61] "metrics-server-569cc877fc-jhv6f" [60a2e6ad-714a-4c6d-b586-232d130397a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 11:57:12.694665   60146 system_pods.go:61] "storage-provisioner" [b54a4493-2c6d-4a5e-b74c-ba9863979688] Running
	I0610 11:57:12.694675   60146 system_pods.go:74] duration metric: took 9.424371ms to wait for pod list to return data ...
	I0610 11:57:12.694687   60146 node_conditions.go:102] verifying NodePressure condition ...
	I0610 11:57:12.697547   60146 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 11:57:12.697571   60146 node_conditions.go:123] node cpu capacity is 2
	I0610 11:57:12.697583   60146 node_conditions.go:105] duration metric: took 2.887217ms to run NodePressure ...
	I0610 11:57:12.697633   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:57:12.966838   60146 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0610 11:57:12.971616   60146 kubeadm.go:733] kubelet initialised
	I0610 11:57:12.971641   60146 kubeadm.go:734] duration metric: took 4.781436ms waiting for restarted kubelet to initialise ...
	I0610 11:57:12.971649   60146 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 11:57:12.977162   60146 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-5ngxc" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:14.984174   60146 pod_ready.go:102] pod "coredns-7db6d8ff4d-5ngxc" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:16.984365   60146 pod_ready.go:102] pod "coredns-7db6d8ff4d-5ngxc" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:18.985423   60146 pod_ready.go:92] pod "coredns-7db6d8ff4d-5ngxc" in "kube-system" namespace has status "Ready":"True"
	I0610 11:57:18.985447   60146 pod_ready.go:81] duration metric: took 6.008259879s for pod "coredns-7db6d8ff4d-5ngxc" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:18.985459   60146 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:18.992228   60146 pod_ready.go:92] pod "etcd-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 11:57:18.992249   60146 pod_ready.go:81] duration metric: took 6.782049ms for pod "etcd-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:18.992261   60146 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:18.998328   60146 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 11:57:18.998354   60146 pod_ready.go:81] duration metric: took 6.080448ms for pod "kube-apiserver-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:18.998363   60146 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:21.004441   60146 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:23.005035   60146 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:23.505290   60146 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 11:57:23.505316   60146 pod_ready.go:81] duration metric: took 4.506946099s for pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:23.505326   60146 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-896ts" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:23.510714   60146 pod_ready.go:92] pod "kube-proxy-896ts" in "kube-system" namespace has status "Ready":"True"
	I0610 11:57:23.510733   60146 pod_ready.go:81] duration metric: took 5.402289ms for pod "kube-proxy-896ts" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:23.510741   60146 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:23.515120   60146 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 11:57:23.515138   60146 pod_ready.go:81] duration metric: took 4.391539ms for pod "kube-scheduler-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:23.515145   60146 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:25.522456   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:28.021723   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:30.521428   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:32.521868   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:35.020800   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:37.021406   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:39.022230   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:41.026828   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:43.521675   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:46.021385   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:48.521085   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:50.521489   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:53.020867   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:55.021644   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:57.521383   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:59.521662   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:02.021864   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:04.521572   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:07.021580   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:09.521128   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:11.522117   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:14.021270   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:16.022304   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:18.521534   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:21.021061   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:23.021721   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:25.521779   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:28.021005   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:30.023892   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:32.521068   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:35.022247   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:37.022812   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:39.521194   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:41.521813   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:43.521847   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:46.021646   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:48.521791   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:51.020662   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:53.020752   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:55.021736   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:57.521819   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:00.021201   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:02.521497   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:05.021115   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:07.521673   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:10.022328   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:12.521244   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:15.020407   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:17.021142   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:19.021398   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:21.021949   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:23.022714   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:25.521324   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:27.523011   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:30.021380   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:32.021456   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:34.021713   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:36.523229   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:39.023269   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:41.521241   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:43.522882   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:46.021368   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:48.021781   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:50.022979   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:52.522357   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:55.022181   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:57.521630   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:00.022732   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:02.524425   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:05.021218   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:07.021736   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:09.521121   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:12.022455   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:14.023274   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:16.521626   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:19.021624   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:21.021728   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:23.022457   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:25.023406   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:27.523393   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:30.022146   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:32.520816   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:34.522050   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:36.522345   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:39.021544   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:41.022726   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:43.520941   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:45.521181   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:47.522257   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:49.522829   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:51.523346   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:54.020982   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:56.021367   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:58.021467   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:00.021643   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:02.021791   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:04.021864   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:06.021968   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:08.521556   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:10.521588   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:12.521870   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:15.025925   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:17.523018   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:20.022903   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:22.521723   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:23.515523   60146 pod_ready.go:81] duration metric: took 4m0.000361045s for pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace to be "Ready" ...
	E0610 12:01:23.515558   60146 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0610 12:01:23.515582   60146 pod_ready.go:38] duration metric: took 4m10.543923644s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 12:01:23.515614   60146 kubeadm.go:591] duration metric: took 4m20.4162222s to restartPrimaryControlPlane
	W0610 12:01:23.515715   60146 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
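
	The long run of pod_ready messages above is minikube polling the metrics-server pod's Ready condition until its 4m0s budget expires. A minimal client-go sketch of that kind of wait follows; the namespace and pod name are taken from the log, while the kubeconfig path, the 2-second interval, and the use of wait.PollUntilContextTimeout are assumptions of this sketch rather than minikube's actual implementation.

	// podready.go: wait for a pod's Ready condition to become True, or time out.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Kubeconfig path is an assumption of this sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-569cc877fc-jhv6f", metav1.GetOptions{})
				if err != nil {
					return false, nil // keep polling on transient errors
				}
				return podReady(pod), nil
			})
		if err != nil {
			fmt.Println("timed out waiting for pod to be Ready:", err)
			return
		}
		fmt.Println("pod is Ready")
	}

	Because metrics-server never becomes Ready here, the wait exhausts its budget and minikube falls back to a full kubeadm reset, which is what the next lines show.
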
	I0610 12:01:23.515751   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0610 12:01:54.687867   60146 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.172093979s)
	I0610 12:01:54.687931   60146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 12:01:54.704702   60146 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 12:01:54.714940   60146 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 12:01:54.724675   60146 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 12:01:54.724702   60146 kubeadm.go:156] found existing configuration files:
	
	I0610 12:01:54.724749   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0610 12:01:54.734652   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 12:01:54.734726   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 12:01:54.744642   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0610 12:01:54.755297   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 12:01:54.755375   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 12:01:54.765800   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0610 12:01:54.775568   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 12:01:54.775636   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 12:01:54.785076   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0610 12:01:54.793645   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 12:01:54.793706   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 12:01:54.803137   60146 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 12:01:54.855022   60146 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0610 12:01:54.855094   60146 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 12:01:54.995399   60146 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 12:01:54.995511   60146 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 12:01:54.995622   60146 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 12:01:55.194136   60146 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 12:01:55.196296   60146 out.go:204]   - Generating certificates and keys ...
	I0610 12:01:55.196396   60146 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 12:01:55.196475   60146 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 12:01:55.196575   60146 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 12:01:55.196680   60146 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 12:01:55.196792   60146 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 12:01:55.196874   60146 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 12:01:55.196984   60146 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 12:01:55.197077   60146 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 12:01:55.197158   60146 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 12:01:55.197231   60146 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 12:01:55.197265   60146 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 12:01:55.197320   60146 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 12:01:55.299197   60146 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 12:01:55.490367   60146 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 12:01:55.751377   60146 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 12:01:55.863144   60146 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 12:01:56.112395   60146 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 12:01:56.113059   60146 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 12:01:56.118410   60146 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 12:01:56.120277   60146 out.go:204]   - Booting up control plane ...
	I0610 12:01:56.120416   60146 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 12:01:56.120503   60146 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 12:01:56.120565   60146 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 12:01:56.138057   60146 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 12:01:56.138509   60146 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 12:01:56.138563   60146 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 12:01:56.263559   60146 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0610 12:01:56.263688   60146 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 12:01:57.264829   60146 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001316355s
	I0610 12:01:57.264927   60146 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0610 12:02:02.267632   60146 kubeadm.go:309] [api-check] The API server is healthy after 5.001644567s
	I0610 12:02:02.282693   60146 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 12:02:02.305741   60146 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 12:02:02.341283   60146 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 12:02:02.341527   60146 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-281114 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 12:02:02.355256   60146 kubeadm.go:309] [bootstrap-token] Using token: mkpvnr.wlx5xvctjlg8pi72
	I0610 12:02:02.356920   60146 out.go:204]   - Configuring RBAC rules ...
	I0610 12:02:02.357052   60146 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 12:02:02.367773   60146 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 12:02:02.376921   60146 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 12:02:02.386582   60146 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 12:02:02.390887   60146 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 12:02:02.399245   60146 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 12:02:02.674008   60146 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 12:02:03.137504   60146 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 12:02:03.673560   60146 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 12:02:03.674588   60146 kubeadm.go:309] 
	I0610 12:02:03.674677   60146 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 12:02:03.674694   60146 kubeadm.go:309] 
	I0610 12:02:03.674774   60146 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 12:02:03.674784   60146 kubeadm.go:309] 
	I0610 12:02:03.674813   60146 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 12:02:03.674924   60146 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 12:02:03.675014   60146 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 12:02:03.675026   60146 kubeadm.go:309] 
	I0610 12:02:03.675128   60146 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 12:02:03.675150   60146 kubeadm.go:309] 
	I0610 12:02:03.675225   60146 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 12:02:03.675234   60146 kubeadm.go:309] 
	I0610 12:02:03.675344   60146 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 12:02:03.675460   60146 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 12:02:03.675587   60146 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 12:02:03.677879   60146 kubeadm.go:309] 
	I0610 12:02:03.677961   60146 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 12:02:03.678057   60146 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 12:02:03.678068   60146 kubeadm.go:309] 
	I0610 12:02:03.678160   60146 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token mkpvnr.wlx5xvctjlg8pi72 \
	I0610 12:02:03.678304   60146 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e \
	I0610 12:02:03.678338   60146 kubeadm.go:309] 	--control-plane 
	I0610 12:02:03.678348   60146 kubeadm.go:309] 
	I0610 12:02:03.678446   60146 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 12:02:03.678460   60146 kubeadm.go:309] 
	I0610 12:02:03.678580   60146 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token mkpvnr.wlx5xvctjlg8pi72 \
	I0610 12:02:03.678726   60146 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e 
	I0610 12:02:03.678869   60146 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
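
	The kubeadm join commands printed above include a --discovery-token-ca-cert-hash value. That value is "sha256:" followed by the hex SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. The Go sketch below recomputes it from a CA certificate; the /var/lib/minikube/certs/ca.crt path follows from the "Using certificateDir" line above, though the exact filename is an assumption of this sketch.

	// capin.go: derive a kubeadm-style CA public key pin from a CA certificate.
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Path assumed from the certificateDir shown in the log above.
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Hash the DER-encoded Subject Public Key Info of the CA.
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		sum := sha256.Sum256(spki)
		fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
	}

	A joining node uses this pin to verify that the control plane it contacts presents the expected CA before it trusts the bootstrap token exchange.
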
	I0610 12:02:03.678886   60146 cni.go:84] Creating CNI manager for ""
	I0610 12:02:03.678896   60146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 12:02:03.681019   60146 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 12:02:03.682415   60146 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 12:02:03.693028   60146 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0610 12:02:03.711436   60146 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 12:02:03.711534   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:03.711611   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-281114 minikube.k8s.io/updated_at=2024_06_10T12_02_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=default-k8s-diff-port-281114 minikube.k8s.io/primary=true
	I0610 12:02:03.888463   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:03.926946   60146 ops.go:34] apiserver oom_adj: -16
	I0610 12:02:04.389105   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:04.888545   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:05.389096   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:05.888853   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:06.389522   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:06.889491   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:07.389417   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:07.889485   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:08.388869   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:08.889480   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:09.389130   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:09.889052   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:10.389053   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:10.889177   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:11.388985   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:11.889405   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:12.388805   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:12.889139   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:13.389072   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:13.888843   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:14.389349   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:14.888798   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:15.388800   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:15.888491   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:16.389394   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:16.889175   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:17.007766   60146 kubeadm.go:1107] duration metric: took 13.296278569s to wait for elevateKubeSystemPrivileges
	W0610 12:02:17.007804   60146 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 12:02:17.007813   60146 kubeadm.go:393] duration metric: took 5m13.970894294s to StartCluster
	I0610 12:02:17.007835   60146 settings.go:142] acquiring lock: {Name:mk00410f6b6051b7558c7a348cc8c9f1c35c7547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:02:17.007914   60146 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 12:02:17.009456   60146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/kubeconfig: {Name:mk6bc087e599296d9e4a696a021944fac20ee98b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:02:17.009751   60146 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.222 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 12:02:17.011669   60146 out.go:177] * Verifying Kubernetes components...
	I0610 12:02:17.009833   60146 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 12:02:17.011705   60146 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-281114"
	I0610 12:02:17.013481   60146 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-281114"
	W0610 12:02:17.013496   60146 addons.go:243] addon storage-provisioner should already be in state true
	I0610 12:02:17.013539   60146 host.go:66] Checking if "default-k8s-diff-port-281114" exists ...
	I0610 12:02:17.011715   60146 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-281114"
	I0610 12:02:17.013612   60146 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-281114"
	W0610 12:02:17.013628   60146 addons.go:243] addon metrics-server should already be in state true
	I0610 12:02:17.013669   60146 host.go:66] Checking if "default-k8s-diff-port-281114" exists ...
	I0610 12:02:17.009996   60146 config.go:182] Loaded profile config "default-k8s-diff-port-281114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 12:02:17.011717   60146 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-281114"
	I0610 12:02:17.013437   60146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:02:17.013792   60146 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-281114"
	I0610 12:02:17.013961   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.014009   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.014043   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.014066   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.014174   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.014211   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.030604   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43907
	I0610 12:02:17.031126   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.031701   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.031729   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.032073   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.032272   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 12:02:17.034510   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42827
	I0610 12:02:17.034557   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42127
	I0610 12:02:17.034950   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.035130   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.035437   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.035459   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.035888   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.035968   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.035986   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.036820   60146 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-281114"
	W0610 12:02:17.036839   60146 addons.go:243] addon default-storageclass should already be in state true
	I0610 12:02:17.036865   60146 host.go:66] Checking if "default-k8s-diff-port-281114" exists ...
	I0610 12:02:17.037323   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.037345   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.038068   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.038408   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.038428   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.039402   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.039436   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.052901   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40043
	I0610 12:02:17.053390   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.053936   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.053959   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.054226   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38515
	I0610 12:02:17.054303   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.054569   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.054905   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.054933   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.055019   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.055040   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.055448   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.055637   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 12:02:17.057623   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 12:02:17.059785   60146 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 12:02:17.058684   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38959
	I0610 12:02:17.060310   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.061277   60146 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 12:02:17.061292   60146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 12:02:17.061311   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 12:02:17.061738   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.061762   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.062097   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.062405   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 12:02:17.064169   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 12:02:17.065635   60146 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0610 12:02:17.065251   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 12:02:17.066901   60146 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0610 12:02:17.065677   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 12:02:17.066921   60146 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0610 12:02:17.066945   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 12:02:17.066952   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 12:02:17.065921   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 12:02:17.067144   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 12:02:17.067267   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 12:02:17.067437   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 12:02:17.070722   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 12:02:17.071110   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 12:02:17.071125   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 12:02:17.071422   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 12:02:17.071582   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 12:02:17.071714   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 12:02:17.072048   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 12:02:17.073784   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46447
	I0610 12:02:17.074157   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.074645   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.074659   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.074986   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.075129   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 12:02:17.076879   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 12:02:17.077138   60146 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 12:02:17.077153   60146 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 12:02:17.077170   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 12:02:17.080253   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 12:02:17.080667   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 12:02:17.080698   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 12:02:17.080862   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 12:02:17.081088   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 12:02:17.081280   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 12:02:17.081466   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 12:02:17.226805   60146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 12:02:17.257188   60146 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-281114" to be "Ready" ...
	I0610 12:02:17.266803   60146 node_ready.go:49] node "default-k8s-diff-port-281114" has status "Ready":"True"
	I0610 12:02:17.266829   60146 node_ready.go:38] duration metric: took 9.610473ms for node "default-k8s-diff-port-281114" to be "Ready" ...
	I0610 12:02:17.266840   60146 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 12:02:17.273132   60146 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5fgtk" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:17.327416   60146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0610 12:02:17.327442   60146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0610 12:02:17.366670   60146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 12:02:17.367685   60146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 12:02:17.378833   60146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0610 12:02:17.378858   60146 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0610 12:02:17.436533   60146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0610 12:02:17.436558   60146 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0610 12:02:17.490426   60146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0610 12:02:18.279491   60146 pod_ready.go:92] pod "coredns-7db6d8ff4d-5fgtk" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:18.279516   60146 pod_ready.go:81] duration metric: took 1.006353706s for pod "coredns-7db6d8ff4d-5fgtk" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.279527   60146 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fg8xx" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.286003   60146 pod_ready.go:92] pod "coredns-7db6d8ff4d-fg8xx" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:18.286024   60146 pod_ready.go:81] duration metric: took 6.488693ms for pod "coredns-7db6d8ff4d-fg8xx" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.286036   60146 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.295995   60146 pod_ready.go:92] pod "etcd-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:18.296015   60146 pod_ready.go:81] duration metric: took 9.973573ms for pod "etcd-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.296024   60146 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.302383   60146 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:18.302407   60146 pod_ready.go:81] duration metric: took 6.376673ms for pod "kube-apiserver-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.302418   60146 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.421208   60146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.054498973s)
	I0610 12:02:18.421244   60146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.053533062s)
	I0610 12:02:18.421270   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.421278   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.421285   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.421290   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.421645   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Closing plugin on server side
	I0610 12:02:18.421691   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.421706   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.421715   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.421717   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.421723   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.421726   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.421734   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.421743   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.422083   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Closing plugin on server side
	I0610 12:02:18.422103   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.422122   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.422123   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.422132   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.453377   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.453408   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.453803   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Closing plugin on server side
	I0610 12:02:18.453806   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.453831   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.475839   60146 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:18.475867   60146 pod_ready.go:81] duration metric: took 173.440125ms for pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.475881   60146 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wh756" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.673586   60146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.183120727s)
	I0610 12:02:18.673646   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.673662   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.673961   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Closing plugin on server side
	I0610 12:02:18.674001   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.674010   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.674020   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.674045   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.674315   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Closing plugin on server side
	I0610 12:02:18.674356   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.674365   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.674376   60146 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-281114"
	I0610 12:02:18.676402   60146 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0610 12:02:18.677734   60146 addons.go:510] duration metric: took 1.667897142s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0610 12:02:19.660297   60146 pod_ready.go:92] pod "kube-proxy-wh756" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:19.660327   60146 pod_ready.go:81] duration metric: took 1.184438894s for pod "kube-proxy-wh756" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:19.660340   60146 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:20.060583   60146 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:20.060607   60146 pod_ready.go:81] duration metric: took 400.25949ms for pod "kube-scheduler-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:20.060616   60146 pod_ready.go:38] duration metric: took 2.793765456s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 12:02:20.060634   60146 api_server.go:52] waiting for apiserver process to appear ...
	I0610 12:02:20.060693   60146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 12:02:20.076416   60146 api_server.go:72] duration metric: took 3.066630137s to wait for apiserver process to appear ...
	I0610 12:02:20.076441   60146 api_server.go:88] waiting for apiserver healthz status ...
	I0610 12:02:20.076462   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 12:02:20.081614   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 200:
	ok
	I0610 12:02:20.082567   60146 api_server.go:141] control plane version: v1.30.1
	I0610 12:02:20.082589   60146 api_server.go:131] duration metric: took 6.142085ms to wait for apiserver health ...
	I0610 12:02:20.082597   60146 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 12:02:20.263766   60146 system_pods.go:59] 9 kube-system pods found
	I0610 12:02:20.263803   60146 system_pods.go:61] "coredns-7db6d8ff4d-5fgtk" [03d948ca-122a-4042-8371-8a9422c187bc] Running
	I0610 12:02:20.263808   60146 system_pods.go:61] "coredns-7db6d8ff4d-fg8xx" [e91ae09c-8821-4843-8c0d-ea734433c213] Running
	I0610 12:02:20.263815   60146 system_pods.go:61] "etcd-default-k8s-diff-port-281114" [110985f7-c57e-453d-8bda-c5104d879eb4] Running
	I0610 12:02:20.263821   60146 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-281114" [e62181ca-648e-4d5f-b2a7-00bed06f3bd2] Running
	I0610 12:02:20.263827   60146 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-281114" [109f02bd-8c9c-40f6-98e8-5cf2b6d97deb] Running
	I0610 12:02:20.263832   60146 system_pods.go:61] "kube-proxy-wh756" [57cbf3d6-c149-4ae1-84d3-6df6a53ea091] Running
	I0610 12:02:20.263838   60146 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-281114" [00889b82-f4fc-4a98-86cd-ab1028dc4461] Running
	I0610 12:02:20.263848   60146 system_pods.go:61] "metrics-server-569cc877fc-j58s9" [f1c91612-b967-447e-bc71-13ba0d11864b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 12:02:20.263854   60146 system_pods.go:61] "storage-provisioner" [8df0a38c-5e91-4b10-a303-c4eff9545669] Running
	I0610 12:02:20.263866   60146 system_pods.go:74] duration metric: took 181.261717ms to wait for pod list to return data ...
	I0610 12:02:20.263878   60146 default_sa.go:34] waiting for default service account to be created ...
	I0610 12:02:20.460812   60146 default_sa.go:45] found service account: "default"
	I0610 12:02:20.460848   60146 default_sa.go:55] duration metric: took 196.961501ms for default service account to be created ...
	I0610 12:02:20.460860   60146 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 12:02:20.664565   60146 system_pods.go:86] 9 kube-system pods found
	I0610 12:02:20.664591   60146 system_pods.go:89] "coredns-7db6d8ff4d-5fgtk" [03d948ca-122a-4042-8371-8a9422c187bc] Running
	I0610 12:02:20.664596   60146 system_pods.go:89] "coredns-7db6d8ff4d-fg8xx" [e91ae09c-8821-4843-8c0d-ea734433c213] Running
	I0610 12:02:20.664601   60146 system_pods.go:89] "etcd-default-k8s-diff-port-281114" [110985f7-c57e-453d-8bda-c5104d879eb4] Running
	I0610 12:02:20.664606   60146 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-281114" [e62181ca-648e-4d5f-b2a7-00bed06f3bd2] Running
	I0610 12:02:20.664610   60146 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-281114" [109f02bd-8c9c-40f6-98e8-5cf2b6d97deb] Running
	I0610 12:02:20.664614   60146 system_pods.go:89] "kube-proxy-wh756" [57cbf3d6-c149-4ae1-84d3-6df6a53ea091] Running
	I0610 12:02:20.664618   60146 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-281114" [00889b82-f4fc-4a98-86cd-ab1028dc4461] Running
	I0610 12:02:20.664626   60146 system_pods.go:89] "metrics-server-569cc877fc-j58s9" [f1c91612-b967-447e-bc71-13ba0d11864b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 12:02:20.664631   60146 system_pods.go:89] "storage-provisioner" [8df0a38c-5e91-4b10-a303-c4eff9545669] Running
	I0610 12:02:20.664640   60146 system_pods.go:126] duration metric: took 203.773693ms to wait for k8s-apps to be running ...
	I0610 12:02:20.664649   60146 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 12:02:20.664690   60146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 12:02:20.681388   60146 system_svc.go:56] duration metric: took 16.731528ms WaitForService to wait for kubelet
	I0610 12:02:20.681411   60146 kubeadm.go:576] duration metric: took 3.671630148s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 12:02:20.681432   60146 node_conditions.go:102] verifying NodePressure condition ...
	I0610 12:02:20.861346   60146 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 12:02:20.861369   60146 node_conditions.go:123] node cpu capacity is 2
	I0610 12:02:20.861379   60146 node_conditions.go:105] duration metric: took 179.94199ms to run NodePressure ...
	I0610 12:02:20.861390   60146 start.go:240] waiting for startup goroutines ...
	I0610 12:02:20.861396   60146 start.go:245] waiting for cluster config update ...
	I0610 12:02:20.861405   60146 start.go:254] writing updated cluster config ...
	I0610 12:02:20.861658   60146 ssh_runner.go:195] Run: rm -f paused
	I0610 12:02:20.911134   60146 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 12:02:20.913129   60146 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-281114" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jun 10 12:05:50 old-k8s-version-166693 crio[645]: time="2024-06-10 12:05:50.488368402Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718021150488337334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=72852c3d-3cf5-4e77-89d1-9939a6df1d81 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:05:50 old-k8s-version-166693 crio[645]: time="2024-06-10 12:05:50.489195134Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e9912131-ee78-42a4-9c16-5fe0a21d5e3f name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:05:50 old-k8s-version-166693 crio[645]: time="2024-06-10 12:05:50.489272760Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9912131-ee78-42a4-9c16-5fe0a21d5e3f name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:05:50 old-k8s-version-166693 crio[645]: time="2024-06-10 12:05:50.489310383Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e9912131-ee78-42a4-9c16-5fe0a21d5e3f name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:05:50 old-k8s-version-166693 crio[645]: time="2024-06-10 12:05:50.522856010Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7da37ec4-1897-470e-8aa9-0ef511e66bd1 name=/runtime.v1.RuntimeService/Version
	Jun 10 12:05:50 old-k8s-version-166693 crio[645]: time="2024-06-10 12:05:50.522946714Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7da37ec4-1897-470e-8aa9-0ef511e66bd1 name=/runtime.v1.RuntimeService/Version
	Jun 10 12:05:50 old-k8s-version-166693 crio[645]: time="2024-06-10 12:05:50.524244112Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6593bbd9-68e6-45bc-ad2a-09db97694b06 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:05:50 old-k8s-version-166693 crio[645]: time="2024-06-10 12:05:50.524705952Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718021150524678968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6593bbd9-68e6-45bc-ad2a-09db97694b06 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:05:50 old-k8s-version-166693 crio[645]: time="2024-06-10 12:05:50.525376413Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=24c2eb77-b7e8-4c9c-a719-65cc14dbd451 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:05:50 old-k8s-version-166693 crio[645]: time="2024-06-10 12:05:50.525437172Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=24c2eb77-b7e8-4c9c-a719-65cc14dbd451 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:05:50 old-k8s-version-166693 crio[645]: time="2024-06-10 12:05:50.525477859Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=24c2eb77-b7e8-4c9c-a719-65cc14dbd451 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:05:50 old-k8s-version-166693 crio[645]: time="2024-06-10 12:05:50.556502367Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3bc0288d-bc6c-4ba8-9b9f-15a94079d80f name=/runtime.v1.RuntimeService/Version
	Jun 10 12:05:50 old-k8s-version-166693 crio[645]: time="2024-06-10 12:05:50.556589791Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3bc0288d-bc6c-4ba8-9b9f-15a94079d80f name=/runtime.v1.RuntimeService/Version
	Jun 10 12:05:50 old-k8s-version-166693 crio[645]: time="2024-06-10 12:05:50.557827956Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0adcc955-7dd8-41b6-840b-8c7e006f882d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:05:50 old-k8s-version-166693 crio[645]: time="2024-06-10 12:05:50.558321042Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718021150558289007,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0adcc955-7dd8-41b6-840b-8c7e006f882d name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:05:50 old-k8s-version-166693 crio[645]: time="2024-06-10 12:05:50.558906245Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe6a951b-be2c-49af-b24d-a2946309b752 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:05:50 old-k8s-version-166693 crio[645]: time="2024-06-10 12:05:50.558966425Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe6a951b-be2c-49af-b24d-a2946309b752 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:05:50 old-k8s-version-166693 crio[645]: time="2024-06-10 12:05:50.559002433Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fe6a951b-be2c-49af-b24d-a2946309b752 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:05:50 old-k8s-version-166693 crio[645]: time="2024-06-10 12:05:50.591328016Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e76a5618-c4e1-4dc2-a467-78d1925c3a5f name=/runtime.v1.RuntimeService/Version
	Jun 10 12:05:50 old-k8s-version-166693 crio[645]: time="2024-06-10 12:05:50.591410488Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e76a5618-c4e1-4dc2-a467-78d1925c3a5f name=/runtime.v1.RuntimeService/Version
	Jun 10 12:05:50 old-k8s-version-166693 crio[645]: time="2024-06-10 12:05:50.592474199Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f9d1c370-ec50-4733-8f66-ae9c5fdc6b89 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:05:50 old-k8s-version-166693 crio[645]: time="2024-06-10 12:05:50.592924061Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718021150592897988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9d1c370-ec50-4733-8f66-ae9c5fdc6b89 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:05:50 old-k8s-version-166693 crio[645]: time="2024-06-10 12:05:50.593650573Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=676153d4-7308-4143-bc93-4fa77615ab4a name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:05:50 old-k8s-version-166693 crio[645]: time="2024-06-10 12:05:50.593707255Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=676153d4-7308-4143-bc93-4fa77615ab4a name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:05:50 old-k8s-version-166693 crio[645]: time="2024-06-10 12:05:50.593775401Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=676153d4-7308-4143-bc93-4fa77615ab4a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jun10 11:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052778] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039241] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.662307] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.954746] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.609904] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.687001] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.069246] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073631] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.221904] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.142650] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.284629] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +6.510984] systemd-fstab-generator[829]: Ignoring "noauto" option for root device
	[  +0.065299] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.018208] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[ +11.261041] kauditd_printk_skb: 46 callbacks suppressed
	[Jun10 11:52] systemd-fstab-generator[5086]: Ignoring "noauto" option for root device
	[Jun10 11:54] systemd-fstab-generator[5370]: Ignoring "noauto" option for root device
	[  +0.069423] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:05:50 up 17 min,  0 users,  load average: 0.14, 0.07, 0.03
	Linux old-k8s-version-166693 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 10 12:05:47 old-k8s-version-166693 kubelet[6537]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Jun 10 12:05:47 old-k8s-version-166693 kubelet[6537]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Jun 10 12:05:47 old-k8s-version-166693 kubelet[6537]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Jun 10 12:05:47 old-k8s-version-166693 kubelet[6537]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00076aef0)
	Jun 10 12:05:47 old-k8s-version-166693 kubelet[6537]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Jun 10 12:05:47 old-k8s-version-166693 kubelet[6537]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00076def0, 0x4f0ac20, 0xc00054afa0, 0x1, 0xc0001000c0)
	Jun 10 12:05:47 old-k8s-version-166693 kubelet[6537]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Jun 10 12:05:47 old-k8s-version-166693 kubelet[6537]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000c16e00, 0xc0001000c0)
	Jun 10 12:05:47 old-k8s-version-166693 kubelet[6537]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jun 10 12:05:47 old-k8s-version-166693 kubelet[6537]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jun 10 12:05:47 old-k8s-version-166693 kubelet[6537]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jun 10 12:05:47 old-k8s-version-166693 kubelet[6537]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0000e2df0, 0xc000668e20)
	Jun 10 12:05:47 old-k8s-version-166693 kubelet[6537]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jun 10 12:05:47 old-k8s-version-166693 kubelet[6537]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jun 10 12:05:47 old-k8s-version-166693 kubelet[6537]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jun 10 12:05:47 old-k8s-version-166693 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 10 12:05:47 old-k8s-version-166693 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 10 12:05:48 old-k8s-version-166693 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Jun 10 12:05:48 old-k8s-version-166693 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 10 12:05:48 old-k8s-version-166693 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 10 12:05:48 old-k8s-version-166693 kubelet[6545]: I0610 12:05:48.201487    6545 server.go:416] Version: v1.20.0
	Jun 10 12:05:48 old-k8s-version-166693 kubelet[6545]: I0610 12:05:48.201893    6545 server.go:837] Client rotation is on, will bootstrap in background
	Jun 10 12:05:48 old-k8s-version-166693 kubelet[6545]: I0610 12:05:48.203821    6545 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 10 12:05:48 old-k8s-version-166693 kubelet[6545]: W0610 12:05:48.204717    6545 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jun 10 12:05:48 old-k8s-version-166693 kubelet[6545]: I0610 12:05:48.205155    6545 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
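
The kubelet log above shows the service exiting with status 255 and systemd scheduling restart number 114, while kubectl cannot reach the apiserver on localhost:8443. A minimal sketch of inspecting that restart loop by hand, assuming the profile name from this run and the same minikube binary used throughout this report (illustrative only, not part of the test run):

    # show current kubelet state inside the node VM for this profile
    out/minikube-linux-amd64 -p old-k8s-version-166693 ssh "sudo systemctl status kubelet --no-pager"
    # tail the kubelet journal to see the panic/backtrace driving the restarts
    out/minikube-linux-amd64 -p old-k8s-version-166693 ssh "sudo journalctl -u kubelet --no-pager | tail -n 50"
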
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-166693 -n old-k8s-version-166693
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-166693 -n old-k8s-version-166693: exit status 2 (223.208374ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-166693" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.80s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-281114 -n default-k8s-diff-port-281114
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-06-10 12:11:21.517516458 +0000 UTC m=+6634.603547960
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
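
The wait that timed out above is a plain label-selector poll; a sketch of the equivalent manual query, assuming the kubectl context carries the same name as the minikube profile:

    # list the dashboard pods the test was waiting for (assumed context name = profile name)
    kubectl --context default-k8s-diff-port-281114 -n kubernetes-dashboard \
      get pods -l k8s-app=kubernetes-dashboard -o wide
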
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-281114 -n default-k8s-diff-port-281114
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-281114 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-281114 logs -n 25: (1.611586363s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p newest-cni-003554                  | newest-cni-003554 | jenkins | v1.33.1 | 10 Jun 24 12:10 UTC | 10 Jun 24 12:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                   |         |         |                     |                     |
	| start   | -p newest-cni-003554 --memory=2200 --alsologtostderr   | newest-cni-003554 | jenkins | v1.33.1 | 10 Jun 24 12:10 UTC | 10 Jun 24 12:10 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                   |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                   |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                   |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                   |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                   |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                   |         |         |                     |                     |
	| image   | newest-cni-003554 image list                           | newest-cni-003554 | jenkins | v1.33.1 | 10 Jun 24 12:10 UTC | 10 Jun 24 12:10 UTC |
	|         | --format=json                                          |                   |         |         |                     |                     |
	| pause   | -p newest-cni-003554                                   | newest-cni-003554 | jenkins | v1.33.1 | 10 Jun 24 12:10 UTC | 10 Jun 24 12:10 UTC |
	|         | --alsologtostderr -v=1                                 |                   |         |         |                     |                     |
	| unpause | -p newest-cni-003554                                   | newest-cni-003554 | jenkins | v1.33.1 | 10 Jun 24 12:10 UTC | 10 Jun 24 12:10 UTC |
	|         | --alsologtostderr -v=1                                 |                   |         |         |                     |                     |
	| delete  | -p newest-cni-003554                                   | newest-cni-003554 | jenkins | v1.33.1 | 10 Jun 24 12:10 UTC | 10 Jun 24 12:10 UTC |
	| delete  | -p newest-cni-003554                                   | newest-cni-003554 | jenkins | v1.33.1 | 10 Jun 24 12:10 UTC | 10 Jun 24 12:10 UTC |
	| start   | -p calico-491653 --memory=3072                         | calico-491653     | jenkins | v1.33.1 | 10 Jun 24 12:10 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                   |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                   |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                             |                   |         |         |                     |                     |
	|         | --container-runtime=crio                               |                   |         |         |                     |                     |
	| ssh     | -p kindnet-491653 pgrep -a                             | kindnet-491653    | jenkins | v1.33.1 | 10 Jun 24 12:10 UTC | 10 Jun 24 12:10 UTC |
	|         | kubelet                                                |                   |         |         |                     |                     |
	| ssh     | -p auto-491653 pgrep -a                                | auto-491653       | jenkins | v1.33.1 | 10 Jun 24 12:11 UTC | 10 Jun 24 12:11 UTC |
	|         | kubelet                                                |                   |         |         |                     |                     |
	| ssh     | -p kindnet-491653 sudo cat                             | kindnet-491653    | jenkins | v1.33.1 | 10 Jun 24 12:11 UTC | 10 Jun 24 12:11 UTC |
	|         | /etc/nsswitch.conf                                     |                   |         |         |                     |                     |
	| ssh     | -p kindnet-491653 sudo cat                             | kindnet-491653    | jenkins | v1.33.1 | 10 Jun 24 12:11 UTC | 10 Jun 24 12:11 UTC |
	|         | /etc/hosts                                             |                   |         |         |                     |                     |
	| ssh     | -p kindnet-491653 sudo cat                             | kindnet-491653    | jenkins | v1.33.1 | 10 Jun 24 12:11 UTC | 10 Jun 24 12:11 UTC |
	|         | /etc/resolv.conf                                       |                   |         |         |                     |                     |
	| ssh     | -p kindnet-491653 sudo crictl                          | kindnet-491653    | jenkins | v1.33.1 | 10 Jun 24 12:11 UTC | 10 Jun 24 12:11 UTC |
	|         | pods                                                   |                   |         |         |                     |                     |
	| ssh     | -p kindnet-491653 sudo crictl                          | kindnet-491653    | jenkins | v1.33.1 | 10 Jun 24 12:11 UTC | 10 Jun 24 12:11 UTC |
	|         | ps --all                                               |                   |         |         |                     |                     |
	| ssh     | -p kindnet-491653 sudo find                            | kindnet-491653    | jenkins | v1.33.1 | 10 Jun 24 12:11 UTC | 10 Jun 24 12:11 UTC |
	|         | /etc/cni -type f -exec sh -c                           |                   |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                   |         |         |                     |                     |
	| ssh     | -p kindnet-491653 sudo ip a s                          | kindnet-491653    | jenkins | v1.33.1 | 10 Jun 24 12:11 UTC | 10 Jun 24 12:11 UTC |
	| ssh     | -p kindnet-491653 sudo ip r s                          | kindnet-491653    | jenkins | v1.33.1 | 10 Jun 24 12:11 UTC | 10 Jun 24 12:11 UTC |
	| ssh     | -p kindnet-491653 sudo                                 | kindnet-491653    | jenkins | v1.33.1 | 10 Jun 24 12:11 UTC | 10 Jun 24 12:11 UTC |
	|         | iptables-save                                          |                   |         |         |                     |                     |
	| ssh     | -p kindnet-491653 sudo                                 | kindnet-491653    | jenkins | v1.33.1 | 10 Jun 24 12:11 UTC | 10 Jun 24 12:11 UTC |
	|         | iptables -t nat -L -n -v                               |                   |         |         |                     |                     |
	| ssh     | -p kindnet-491653 sudo                                 | kindnet-491653    | jenkins | v1.33.1 | 10 Jun 24 12:11 UTC | 10 Jun 24 12:11 UTC |
	|         | systemctl status kubelet --all                         |                   |         |         |                     |                     |
	|         | --full --no-pager                                      |                   |         |         |                     |                     |
	| ssh     | -p kindnet-491653 sudo                                 | kindnet-491653    | jenkins | v1.33.1 | 10 Jun 24 12:11 UTC | 10 Jun 24 12:11 UTC |
	|         | systemctl cat kubelet                                  |                   |         |         |                     |                     |
	|         | --no-pager                                             |                   |         |         |                     |                     |
	| ssh     | -p kindnet-491653 sudo                                 | kindnet-491653    | jenkins | v1.33.1 | 10 Jun 24 12:11 UTC | 10 Jun 24 12:11 UTC |
	|         | journalctl -xeu kubelet --all                          |                   |         |         |                     |                     |
	|         | --full --no-pager                                      |                   |         |         |                     |                     |
	| ssh     | -p kindnet-491653 sudo cat                             | kindnet-491653    | jenkins | v1.33.1 | 10 Jun 24 12:11 UTC | 10 Jun 24 12:11 UTC |
	|         | /etc/kubernetes/kubelet.conf                           |                   |         |         |                     |                     |
	| ssh     | -p kindnet-491653 sudo cat                             | kindnet-491653    | jenkins | v1.33.1 | 10 Jun 24 12:11 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                           |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 12:10:55
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 12:10:55.285142   66701 out.go:291] Setting OutFile to fd 1 ...
	I0610 12:10:55.285410   66701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 12:10:55.285420   66701 out.go:304] Setting ErrFile to fd 2...
	I0610 12:10:55.285424   66701 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 12:10:55.285614   66701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 12:10:55.286394   66701 out.go:298] Setting JSON to false
	I0610 12:10:55.287455   66701 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6796,"bootTime":1718014659,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 12:10:55.287515   66701 start.go:139] virtualization: kvm guest
	I0610 12:10:55.289922   66701 out.go:177] * [calico-491653] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 12:10:55.291429   66701 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 12:10:55.291398   66701 notify.go:220] Checking for updates...
	I0610 12:10:55.292992   66701 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 12:10:55.294756   66701 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 12:10:55.296454   66701 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 12:10:55.298574   66701 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 12:10:55.300218   66701 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 12:10:55.302016   66701 config.go:182] Loaded profile config "auto-491653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 12:10:55.302166   66701 config.go:182] Loaded profile config "default-k8s-diff-port-281114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 12:10:55.302283   66701 config.go:182] Loaded profile config "kindnet-491653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 12:10:55.302387   66701 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 12:10:55.339966   66701 out.go:177] * Using the kvm2 driver based on user configuration
	I0610 12:10:55.341479   66701 start.go:297] selected driver: kvm2
	I0610 12:10:55.341495   66701 start.go:901] validating driver "kvm2" against <nil>
	I0610 12:10:55.341506   66701 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 12:10:55.342206   66701 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 12:10:55.342278   66701 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19046-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0610 12:10:55.358386   66701 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0610 12:10:55.358456   66701 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 12:10:55.358778   66701 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 12:10:55.358848   66701 cni.go:84] Creating CNI manager for "calico"
	I0610 12:10:55.358862   66701 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0610 12:10:55.358926   66701 start.go:340] cluster config:
	{Name:calico-491653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:calico-491653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 12:10:55.359068   66701 iso.go:125] acquiring lock: {Name:mk85871dbcb370cef71a4aa2ff7ef43503908c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 12:10:55.361140   66701 out.go:177] * Starting "calico-491653" primary control-plane node in "calico-491653" cluster
	I0610 12:10:55.362525   66701 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 12:10:55.362589   66701 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0610 12:10:55.362602   66701 cache.go:56] Caching tarball of preloaded images
	I0610 12:10:55.362689   66701 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 12:10:55.362699   66701 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0610 12:10:55.362782   66701 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/calico-491653/config.json ...
	I0610 12:10:55.362799   66701 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/calico-491653/config.json: {Name:mk2bca090c58baad3020841f4dc58e54facacd93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:10:55.362940   66701 start.go:360] acquireMachinesLock for calico-491653: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 12:10:55.362995   66701 start.go:364] duration metric: took 24.467µs to acquireMachinesLock for "calico-491653"
	I0610 12:10:55.363020   66701 start.go:93] Provisioning new machine with config: &{Name:calico-491653 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:calico-491653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 12:10:55.363110   66701 start.go:125] createHost starting for "" (driver="kvm2")
	I0610 12:10:52.916645   64909 pod_ready.go:102] pod "coredns-7db6d8ff4d-fsqqw" in "kube-system" namespace has status "Ready":"False"
	I0610 12:10:55.416680   64909 pod_ready.go:102] pod "coredns-7db6d8ff4d-fsqqw" in "kube-system" namespace has status "Ready":"False"
	I0610 12:10:55.366407   66701 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 12:10:55.366977   66701 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:10:55.367037   66701 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:10:55.383175   66701 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38801
	I0610 12:10:55.383626   66701 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:10:55.384165   66701 main.go:141] libmachine: Using API Version  1
	I0610 12:10:55.384192   66701 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:10:55.384586   66701 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:10:55.384838   66701 main.go:141] libmachine: (calico-491653) Calling .GetMachineName
	I0610 12:10:55.385025   66701 main.go:141] libmachine: (calico-491653) Calling .DriverName
	I0610 12:10:55.385178   66701 start.go:159] libmachine.API.Create for "calico-491653" (driver="kvm2")
	I0610 12:10:55.385204   66701 client.go:168] LocalClient.Create starting
	I0610 12:10:55.385239   66701 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem
	I0610 12:10:55.385276   66701 main.go:141] libmachine: Decoding PEM data...
	I0610 12:10:55.385294   66701 main.go:141] libmachine: Parsing certificate...
	I0610 12:10:55.385367   66701 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem
	I0610 12:10:55.385395   66701 main.go:141] libmachine: Decoding PEM data...
	I0610 12:10:55.385413   66701 main.go:141] libmachine: Parsing certificate...
	I0610 12:10:55.385437   66701 main.go:141] libmachine: Running pre-create checks...
	I0610 12:10:55.385450   66701 main.go:141] libmachine: (calico-491653) Calling .PreCreateCheck
	I0610 12:10:55.385820   66701 main.go:141] libmachine: (calico-491653) Calling .GetConfigRaw
	I0610 12:10:55.386241   66701 main.go:141] libmachine: Creating machine...
	I0610 12:10:55.386258   66701 main.go:141] libmachine: (calico-491653) Calling .Create
	I0610 12:10:55.386452   66701 main.go:141] libmachine: (calico-491653) Creating KVM machine...
	I0610 12:10:55.387802   66701 main.go:141] libmachine: (calico-491653) DBG | found existing default KVM network
	I0610 12:10:55.389481   66701 main.go:141] libmachine: (calico-491653) DBG | I0610 12:10:55.389312   66724 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:87:3c:0c} reservation:<nil>}
	I0610 12:10:55.390616   66701 main.go:141] libmachine: (calico-491653) DBG | I0610 12:10:55.390538   66724 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:0c:3e:0d} reservation:<nil>}
	I0610 12:10:55.391558   66701 main.go:141] libmachine: (calico-491653) DBG | I0610 12:10:55.391452   66724 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:aa:cb:fa} reservation:<nil>}
	I0610 12:10:55.392657   66701 main.go:141] libmachine: (calico-491653) DBG | I0610 12:10:55.392582   66724 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00028bac0}
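The three "skipping subnet ... that is taken" lines above show minikube probing candidate private /24 ranges until it finds one that no existing libvirt bridge already occupies. A minimal Go sketch of that idea follows; `pickFreeSubnet`, `subnetTaken`, and the fixed third-octet progression are illustrative assumptions, not minikube's actual network.go logic.

```go
// Hypothetical sketch of the subnet probing shown in the log above: walk candidate
// 192.168.x.0/24 ranges and return the first one no local interface already uses.
package main

import (
	"fmt"
	"net"
)

// subnetTaken reports whether any host interface address falls inside cidr
// (e.g. a virbrX bridge for a network that already exists).
func subnetTaken(cidr string) (bool, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return false, err
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false, err
	}
	for _, a := range addrs {
		if ia, ok := a.(*net.IPNet); ok && ipnet.Contains(ia.IP) {
			return true, nil
		}
	}
	return false, nil
}

// pickFreeSubnet walks the same third-octet progression the log shows (39, 50, 61, 72, ...).
func pickFreeSubnet() (string, error) {
	for octet := 39; octet <= 254; octet += 11 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		taken, err := subnetTaken(cidr)
		if err != nil {
			return "", err
		}
		if !taken {
			return cidr, nil
		}
	}
	return "", fmt.Errorf("no free private /24 found")
}

func main() {
	cidr, err := pickFreeSubnet()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("using free private subnet", cidr)
}
```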
	I0610 12:10:55.392720   66701 main.go:141] libmachine: (calico-491653) DBG | created network xml: 
	I0610 12:10:55.392746   66701 main.go:141] libmachine: (calico-491653) DBG | <network>
	I0610 12:10:55.392755   66701 main.go:141] libmachine: (calico-491653) DBG |   <name>mk-calico-491653</name>
	I0610 12:10:55.392764   66701 main.go:141] libmachine: (calico-491653) DBG |   <dns enable='no'/>
	I0610 12:10:55.392770   66701 main.go:141] libmachine: (calico-491653) DBG |   
	I0610 12:10:55.392782   66701 main.go:141] libmachine: (calico-491653) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0610 12:10:55.392791   66701 main.go:141] libmachine: (calico-491653) DBG |     <dhcp>
	I0610 12:10:55.392796   66701 main.go:141] libmachine: (calico-491653) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0610 12:10:55.392801   66701 main.go:141] libmachine: (calico-491653) DBG |     </dhcp>
	I0610 12:10:55.392806   66701 main.go:141] libmachine: (calico-491653) DBG |   </ip>
	I0610 12:10:55.392811   66701 main.go:141] libmachine: (calico-491653) DBG |   
	I0610 12:10:55.392815   66701 main.go:141] libmachine: (calico-491653) DBG | </network>
	I0610 12:10:55.392837   66701 main.go:141] libmachine: (calico-491653) DBG | 
	I0610 12:10:55.398444   66701 main.go:141] libmachine: (calico-491653) DBG | trying to create private KVM network mk-calico-491653 192.168.72.0/24...
	I0610 12:10:55.475297   66701 main.go:141] libmachine: (calico-491653) Setting up store path in /home/jenkins/minikube-integration/19046-3880/.minikube/machines/calico-491653 ...
	I0610 12:10:55.475332   66701 main.go:141] libmachine: (calico-491653) Building disk image from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0610 12:10:55.475344   66701 main.go:141] libmachine: (calico-491653) DBG | private KVM network mk-calico-491653 192.168.72.0/24 created
	I0610 12:10:55.475363   66701 main.go:141] libmachine: (calico-491653) DBG | I0610 12:10:55.475228   66724 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 12:10:55.475425   66701 main.go:141] libmachine: (calico-491653) Downloading /home/jenkins/minikube-integration/19046-3880/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 12:10:55.724805   66701 main.go:141] libmachine: (calico-491653) DBG | I0610 12:10:55.724650   66724 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/calico-491653/id_rsa...
	I0610 12:10:55.850590   66701 main.go:141] libmachine: (calico-491653) DBG | I0610 12:10:55.850440   66724 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/calico-491653/calico-491653.rawdisk...
	I0610 12:10:55.850622   66701 main.go:141] libmachine: (calico-491653) DBG | Writing magic tar header
	I0610 12:10:55.850632   66701 main.go:141] libmachine: (calico-491653) DBG | Writing SSH key tar header
	I0610 12:10:55.850640   66701 main.go:141] libmachine: (calico-491653) DBG | I0610 12:10:55.850565   66724 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/calico-491653 ...
	I0610 12:10:55.850662   66701 main.go:141] libmachine: (calico-491653) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/calico-491653
	I0610 12:10:55.850705   66701 main.go:141] libmachine: (calico-491653) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/calico-491653 (perms=drwx------)
	I0610 12:10:55.850725   66701 main.go:141] libmachine: (calico-491653) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines (perms=drwxr-xr-x)
	I0610 12:10:55.850738   66701 main.go:141] libmachine: (calico-491653) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines
	I0610 12:10:55.850751   66701 main.go:141] libmachine: (calico-491653) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 12:10:55.850759   66701 main.go:141] libmachine: (calico-491653) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880
	I0610 12:10:55.850770   66701 main.go:141] libmachine: (calico-491653) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0610 12:10:55.850780   66701 main.go:141] libmachine: (calico-491653) DBG | Checking permissions on dir: /home/jenkins
	I0610 12:10:55.850802   66701 main.go:141] libmachine: (calico-491653) DBG | Checking permissions on dir: /home
	I0610 12:10:55.850828   66701 main.go:141] libmachine: (calico-491653) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube (perms=drwxr-xr-x)
	I0610 12:10:55.850839   66701 main.go:141] libmachine: (calico-491653) DBG | Skipping /home - not owner
	I0610 12:10:55.850875   66701 main.go:141] libmachine: (calico-491653) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880 (perms=drwxrwxr-x)
	I0610 12:10:55.850902   66701 main.go:141] libmachine: (calico-491653) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0610 12:10:55.850916   66701 main.go:141] libmachine: (calico-491653) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0610 12:10:55.850928   66701 main.go:141] libmachine: (calico-491653) Creating domain...
	I0610 12:10:55.852095   66701 main.go:141] libmachine: (calico-491653) define libvirt domain using xml: 
	I0610 12:10:55.852131   66701 main.go:141] libmachine: (calico-491653) <domain type='kvm'>
	I0610 12:10:55.852144   66701 main.go:141] libmachine: (calico-491653)   <name>calico-491653</name>
	I0610 12:10:55.852151   66701 main.go:141] libmachine: (calico-491653)   <memory unit='MiB'>3072</memory>
	I0610 12:10:55.852160   66701 main.go:141] libmachine: (calico-491653)   <vcpu>2</vcpu>
	I0610 12:10:55.852167   66701 main.go:141] libmachine: (calico-491653)   <features>
	I0610 12:10:55.852178   66701 main.go:141] libmachine: (calico-491653)     <acpi/>
	I0610 12:10:55.852186   66701 main.go:141] libmachine: (calico-491653)     <apic/>
	I0610 12:10:55.852203   66701 main.go:141] libmachine: (calico-491653)     <pae/>
	I0610 12:10:55.852214   66701 main.go:141] libmachine: (calico-491653)     
	I0610 12:10:55.852225   66701 main.go:141] libmachine: (calico-491653)   </features>
	I0610 12:10:55.852232   66701 main.go:141] libmachine: (calico-491653)   <cpu mode='host-passthrough'>
	I0610 12:10:55.852256   66701 main.go:141] libmachine: (calico-491653)   
	I0610 12:10:55.852275   66701 main.go:141] libmachine: (calico-491653)   </cpu>
	I0610 12:10:55.852288   66701 main.go:141] libmachine: (calico-491653)   <os>
	I0610 12:10:55.852308   66701 main.go:141] libmachine: (calico-491653)     <type>hvm</type>
	I0610 12:10:55.852322   66701 main.go:141] libmachine: (calico-491653)     <boot dev='cdrom'/>
	I0610 12:10:55.852329   66701 main.go:141] libmachine: (calico-491653)     <boot dev='hd'/>
	I0610 12:10:55.852336   66701 main.go:141] libmachine: (calico-491653)     <bootmenu enable='no'/>
	I0610 12:10:55.852340   66701 main.go:141] libmachine: (calico-491653)   </os>
	I0610 12:10:55.852346   66701 main.go:141] libmachine: (calico-491653)   <devices>
	I0610 12:10:55.852352   66701 main.go:141] libmachine: (calico-491653)     <disk type='file' device='cdrom'>
	I0610 12:10:55.852364   66701 main.go:141] libmachine: (calico-491653)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/calico-491653/boot2docker.iso'/>
	I0610 12:10:55.852372   66701 main.go:141] libmachine: (calico-491653)       <target dev='hdc' bus='scsi'/>
	I0610 12:10:55.852378   66701 main.go:141] libmachine: (calico-491653)       <readonly/>
	I0610 12:10:55.852384   66701 main.go:141] libmachine: (calico-491653)     </disk>
	I0610 12:10:55.852392   66701 main.go:141] libmachine: (calico-491653)     <disk type='file' device='disk'>
	I0610 12:10:55.852404   66701 main.go:141] libmachine: (calico-491653)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0610 12:10:55.852418   66701 main.go:141] libmachine: (calico-491653)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/calico-491653/calico-491653.rawdisk'/>
	I0610 12:10:55.852429   66701 main.go:141] libmachine: (calico-491653)       <target dev='hda' bus='virtio'/>
	I0610 12:10:55.852437   66701 main.go:141] libmachine: (calico-491653)     </disk>
	I0610 12:10:55.852449   66701 main.go:141] libmachine: (calico-491653)     <interface type='network'>
	I0610 12:10:55.852460   66701 main.go:141] libmachine: (calico-491653)       <source network='mk-calico-491653'/>
	I0610 12:10:55.852470   66701 main.go:141] libmachine: (calico-491653)       <model type='virtio'/>
	I0610 12:10:55.852479   66701 main.go:141] libmachine: (calico-491653)     </interface>
	I0610 12:10:55.852489   66701 main.go:141] libmachine: (calico-491653)     <interface type='network'>
	I0610 12:10:55.852498   66701 main.go:141] libmachine: (calico-491653)       <source network='default'/>
	I0610 12:10:55.852506   66701 main.go:141] libmachine: (calico-491653)       <model type='virtio'/>
	I0610 12:10:55.852513   66701 main.go:141] libmachine: (calico-491653)     </interface>
	I0610 12:10:55.852524   66701 main.go:141] libmachine: (calico-491653)     <serial type='pty'>
	I0610 12:10:55.852537   66701 main.go:141] libmachine: (calico-491653)       <target port='0'/>
	I0610 12:10:55.852544   66701 main.go:141] libmachine: (calico-491653)     </serial>
	I0610 12:10:55.852556   66701 main.go:141] libmachine: (calico-491653)     <console type='pty'>
	I0610 12:10:55.852569   66701 main.go:141] libmachine: (calico-491653)       <target type='serial' port='0'/>
	I0610 12:10:55.852581   66701 main.go:141] libmachine: (calico-491653)     </console>
	I0610 12:10:55.852591   66701 main.go:141] libmachine: (calico-491653)     <rng model='virtio'>
	I0610 12:10:55.852601   66701 main.go:141] libmachine: (calico-491653)       <backend model='random'>/dev/random</backend>
	I0610 12:10:55.852609   66701 main.go:141] libmachine: (calico-491653)     </rng>
	I0610 12:10:55.852614   66701 main.go:141] libmachine: (calico-491653)     
	I0610 12:10:55.852624   66701 main.go:141] libmachine: (calico-491653)     
	I0610 12:10:55.852633   66701 main.go:141] libmachine: (calico-491653)   </devices>
	I0610 12:10:55.852643   66701 main.go:141] libmachine: (calico-491653) </domain>
	I0610 12:10:55.852653   66701 main.go:141] libmachine: (calico-491653) 
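The domain definition logged above is plain libvirt XML filled in with the machine's name, memory, CPU count, disk path, and network. A hedged sketch of producing a similar, abridged definition with Go's text/template follows; the `domainConfig` struct and the placeholder disk path are invented for illustration, and the template omits the cdrom, serial, console, and rng devices that the logged domain also carries.

```go
// Minimal sketch (not libmachine's actual code) of rendering a libvirt domain
// definition like the one logged above via text/template.
package main

import (
	"os"
	"text/template"
)

const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

// domainConfig holds only the values the template substitutes; field names are illustrative.
type domainConfig struct {
	Name      string
	MemoryMiB int
	CPUs      int
	DiskPath  string
	Network   string
}

func main() {
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	cfg := domainConfig{
		Name:      "calico-491653",
		MemoryMiB: 3072,
		CPUs:      2,
		DiskPath:  "/path/to/calico-491653.rawdisk", // placeholder path
		Network:   "mk-calico-491653",
	}
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
```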
	I0610 12:10:55.857580   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined MAC address 52:54:00:60:ba:62 in network default
	I0610 12:10:55.858210   66701 main.go:141] libmachine: (calico-491653) Ensuring networks are active...
	I0610 12:10:55.858232   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:10:55.858942   66701 main.go:141] libmachine: (calico-491653) Ensuring network default is active
	I0610 12:10:55.859455   66701 main.go:141] libmachine: (calico-491653) Ensuring network mk-calico-491653 is active
	I0610 12:10:55.860016   66701 main.go:141] libmachine: (calico-491653) Getting domain xml...
	I0610 12:10:55.860755   66701 main.go:141] libmachine: (calico-491653) Creating domain...
	I0610 12:10:57.143244   66701 main.go:141] libmachine: (calico-491653) Waiting to get IP...
	I0610 12:10:57.144312   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:10:57.144822   66701 main.go:141] libmachine: (calico-491653) DBG | unable to find current IP address of domain calico-491653 in network mk-calico-491653
	I0610 12:10:57.144860   66701 main.go:141] libmachine: (calico-491653) DBG | I0610 12:10:57.144806   66724 retry.go:31] will retry after 188.82416ms: waiting for machine to come up
	I0610 12:10:57.335196   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:10:57.335740   66701 main.go:141] libmachine: (calico-491653) DBG | unable to find current IP address of domain calico-491653 in network mk-calico-491653
	I0610 12:10:57.335770   66701 main.go:141] libmachine: (calico-491653) DBG | I0610 12:10:57.335689   66724 retry.go:31] will retry after 344.222811ms: waiting for machine to come up
	I0610 12:10:57.681448   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:10:57.681922   66701 main.go:141] libmachine: (calico-491653) DBG | unable to find current IP address of domain calico-491653 in network mk-calico-491653
	I0610 12:10:57.681957   66701 main.go:141] libmachine: (calico-491653) DBG | I0610 12:10:57.681890   66724 retry.go:31] will retry after 399.020668ms: waiting for machine to come up
	I0610 12:10:58.082450   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:10:58.082877   66701 main.go:141] libmachine: (calico-491653) DBG | unable to find current IP address of domain calico-491653 in network mk-calico-491653
	I0610 12:10:58.082905   66701 main.go:141] libmachine: (calico-491653) DBG | I0610 12:10:58.082833   66724 retry.go:31] will retry after 451.727275ms: waiting for machine to come up
	I0610 12:10:58.536488   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:10:58.537063   66701 main.go:141] libmachine: (calico-491653) DBG | unable to find current IP address of domain calico-491653 in network mk-calico-491653
	I0610 12:10:58.537096   66701 main.go:141] libmachine: (calico-491653) DBG | I0610 12:10:58.537016   66724 retry.go:31] will retry after 525.748073ms: waiting for machine to come up
	I0610 12:10:59.064839   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:10:59.065317   66701 main.go:141] libmachine: (calico-491653) DBG | unable to find current IP address of domain calico-491653 in network mk-calico-491653
	I0610 12:10:59.065351   66701 main.go:141] libmachine: (calico-491653) DBG | I0610 12:10:59.065278   66724 retry.go:31] will retry after 917.908681ms: waiting for machine to come up
	I0610 12:10:59.985418   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:10:59.985858   66701 main.go:141] libmachine: (calico-491653) DBG | unable to find current IP address of domain calico-491653 in network mk-calico-491653
	I0610 12:10:59.985880   66701 main.go:141] libmachine: (calico-491653) DBG | I0610 12:10:59.985808   66724 retry.go:31] will retry after 800.027546ms: waiting for machine to come up
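The repeated "will retry after ...: waiting for machine to come up" lines are a poll-with-backoff loop: query the domain's DHCP lease, and if no IP is known yet, sleep a growing, jittered interval and try again until a deadline expires. A self-contained Go sketch of that pattern follows; `lookupIP` is a stand-in for the real libvirt lease query, and the delay growth factor is an assumption made only to mirror the irregular intervals above.

```go
// Sketch of the retry pattern visible in the log: poll until an IP shows up
// or a deadline passes, backing off with jitter between attempts.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("unable to find current IP address")

// lookupIP pretends to ask the hypervisor for the domain's IP; a real
// implementation would inspect the libvirt DHCP leases instead.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoLease
	}
	return "192.168.72.179", nil
}

func waitForIP(deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 200 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			return ip, nil
		}
		if time.Since(start) > deadline {
			return "", fmt.Errorf("timed out waiting for machine to come up: %w", err)
		}
		// Grow the delay and add jitter, mirroring the intervals in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
}

func main() {
	ip, err := waitForIP(2 * time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("Found IP for machine:", ip)
}
```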
	I0610 12:10:57.916214   64909 pod_ready.go:102] pod "coredns-7db6d8ff4d-fsqqw" in "kube-system" namespace has status "Ready":"False"
	I0610 12:11:00.415078   64909 pod_ready.go:102] pod "coredns-7db6d8ff4d-fsqqw" in "kube-system" namespace has status "Ready":"False"
	I0610 12:11:02.915858   64909 pod_ready.go:102] pod "coredns-7db6d8ff4d-fsqqw" in "kube-system" namespace has status "Ready":"False"
	I0610 12:11:03.915493   64909 pod_ready.go:92] pod "coredns-7db6d8ff4d-fsqqw" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:03.915518   64909 pod_ready.go:81] duration metric: took 38.507604781s for pod "coredns-7db6d8ff4d-fsqqw" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:03.915530   64909 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-491653" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:03.922304   64909 pod_ready.go:92] pod "etcd-auto-491653" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:03.922329   64909 pod_ready.go:81] duration metric: took 6.792409ms for pod "etcd-auto-491653" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:03.922341   64909 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-491653" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:03.930016   64909 pod_ready.go:92] pod "kube-apiserver-auto-491653" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:03.930047   64909 pod_ready.go:81] duration metric: took 7.696859ms for pod "kube-apiserver-auto-491653" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:03.930060   64909 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-491653" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:03.936329   64909 pod_ready.go:92] pod "kube-controller-manager-auto-491653" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:03.936356   64909 pod_ready.go:81] duration metric: took 6.288513ms for pod "kube-controller-manager-auto-491653" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:03.936369   64909 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-hrfcs" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:03.941691   64909 pod_ready.go:92] pod "kube-proxy-hrfcs" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:03.941715   64909 pod_ready.go:81] duration metric: took 5.340019ms for pod "kube-proxy-hrfcs" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:03.941724   64909 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-491653" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:04.312580   64909 pod_ready.go:92] pod "kube-scheduler-auto-491653" in "kube-system" namespace has status "Ready":"True"
	I0610 12:11:04.312605   64909 pod_ready.go:81] duration metric: took 370.87502ms for pod "kube-scheduler-auto-491653" in "kube-system" namespace to be "Ready" ...
	I0610 12:11:04.312613   64909 pod_ready.go:38] duration metric: took 41.417049976s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 12:11:04.312627   64909 api_server.go:52] waiting for apiserver process to appear ...
	I0610 12:11:04.312670   64909 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 12:11:04.329512   64909 api_server.go:72] duration metric: took 41.769929618s to wait for apiserver process to appear ...
	I0610 12:11:04.329542   64909 api_server.go:88] waiting for apiserver healthz status ...
	I0610 12:11:04.329566   64909 api_server.go:253] Checking apiserver healthz at https://192.168.61.87:8443/healthz ...
	I0610 12:11:04.333654   64909 api_server.go:279] https://192.168.61.87:8443/healthz returned 200:
	ok
	I0610 12:11:04.334633   64909 api_server.go:141] control plane version: v1.30.1
	I0610 12:11:04.334653   64909 api_server.go:131] duration metric: took 5.104842ms to wait for apiserver health ...
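The healthz wait above boils down to an HTTPS GET against the API server's /healthz endpoint, treating a 200 response whose body is "ok" as healthy. A small Go sketch under that assumption follows; it skips TLS verification purely to stay self-contained, whereas a real client would trust the cluster CA.

```go
// Sketch of the /healthz probe logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy returns true when the endpoint answers 200 with body "ok".
func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Simplification for the sketch; production code should verify the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.61.87:8443/healthz")
	fmt.Println("healthy:", ok, "err:", err)
}
```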
	I0610 12:11:04.334660   64909 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 12:11:04.517303   64909 system_pods.go:59] 7 kube-system pods found
	I0610 12:11:04.517344   64909 system_pods.go:61] "coredns-7db6d8ff4d-fsqqw" [e914476d-ab0e-49ac-9973-02c24b9e58ae] Running
	I0610 12:11:04.517353   64909 system_pods.go:61] "etcd-auto-491653" [d13cb710-3a91-48d1-8c37-e46a7c6da89e] Running
	I0610 12:11:04.517359   64909 system_pods.go:61] "kube-apiserver-auto-491653" [6729d726-47bf-4cbd-ad11-baf6a33f440a] Running
	I0610 12:11:04.517365   64909 system_pods.go:61] "kube-controller-manager-auto-491653" [7271ea10-14b4-4979-8d95-f33c00af8ada] Running
	I0610 12:11:04.517371   64909 system_pods.go:61] "kube-proxy-hrfcs" [8c9ca8be-7030-478b-a629-3796a8eadbee] Running
	I0610 12:11:04.517377   64909 system_pods.go:61] "kube-scheduler-auto-491653" [6dc7a1a8-4db3-47cb-976d-ef46257434fd] Running
	I0610 12:11:04.517382   64909 system_pods.go:61] "storage-provisioner" [0d955902-a00a-4a1d-bc31-5cd24779d460] Running
	I0610 12:11:04.517390   64909 system_pods.go:74] duration metric: took 182.723734ms to wait for pod list to return data ...
	I0610 12:11:04.517399   64909 default_sa.go:34] waiting for default service account to be created ...
	I0610 12:11:04.712181   64909 default_sa.go:45] found service account: "default"
	I0610 12:11:04.712214   64909 default_sa.go:55] duration metric: took 194.806986ms for default service account to be created ...
	I0610 12:11:04.712225   64909 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 12:11:04.915325   64909 system_pods.go:86] 7 kube-system pods found
	I0610 12:11:04.915357   64909 system_pods.go:89] "coredns-7db6d8ff4d-fsqqw" [e914476d-ab0e-49ac-9973-02c24b9e58ae] Running
	I0610 12:11:04.915362   64909 system_pods.go:89] "etcd-auto-491653" [d13cb710-3a91-48d1-8c37-e46a7c6da89e] Running
	I0610 12:11:04.915367   64909 system_pods.go:89] "kube-apiserver-auto-491653" [6729d726-47bf-4cbd-ad11-baf6a33f440a] Running
	I0610 12:11:04.915371   64909 system_pods.go:89] "kube-controller-manager-auto-491653" [7271ea10-14b4-4979-8d95-f33c00af8ada] Running
	I0610 12:11:04.915377   64909 system_pods.go:89] "kube-proxy-hrfcs" [8c9ca8be-7030-478b-a629-3796a8eadbee] Running
	I0610 12:11:04.915381   64909 system_pods.go:89] "kube-scheduler-auto-491653" [6dc7a1a8-4db3-47cb-976d-ef46257434fd] Running
	I0610 12:11:04.915385   64909 system_pods.go:89] "storage-provisioner" [0d955902-a00a-4a1d-bc31-5cd24779d460] Running
	I0610 12:11:04.915392   64909 system_pods.go:126] duration metric: took 203.16055ms to wait for k8s-apps to be running ...
	I0610 12:11:04.915401   64909 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 12:11:04.915453   64909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 12:11:04.931754   64909 system_svc.go:56] duration metric: took 16.339629ms WaitForService to wait for kubelet
	I0610 12:11:04.931794   64909 kubeadm.go:576] duration metric: took 42.372215782s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 12:11:04.931819   64909 node_conditions.go:102] verifying NodePressure condition ...
	I0610 12:11:05.113044   64909 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 12:11:05.113079   64909 node_conditions.go:123] node cpu capacity is 2
	I0610 12:11:05.113096   64909 node_conditions.go:105] duration metric: took 181.270376ms to run NodePressure ...
	I0610 12:11:05.113112   64909 start.go:240] waiting for startup goroutines ...
	I0610 12:11:05.113122   64909 start.go:245] waiting for cluster config update ...
	I0610 12:11:05.113135   64909 start.go:254] writing updated cluster config ...
	I0610 12:11:05.113498   64909 ssh_runner.go:195] Run: rm -f paused
	I0610 12:11:05.166964   64909 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 12:11:05.169204   64909 out.go:177] * Done! kubectl is now configured to use "auto-491653" cluster and "default" namespace by default
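Once "auto-491653" is up, the same readiness information the pod_ready.go lines report can be read back with client-go. The sketch below lists the kube-system pods and checks their Ready condition; the kubeconfig path and the minimal error handling are simplifications for illustration.

```go
// Hedged sketch: read kube-system pod readiness with client-go, mirroring the
// pod_ready.go checks in the log above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes kubectl is already pointed at the freshly started cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s\tReady=%v\n", p.Name, ready)
	}
}
```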
	I0610 12:11:00.787057   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:00.787468   66701 main.go:141] libmachine: (calico-491653) DBG | unable to find current IP address of domain calico-491653 in network mk-calico-491653
	I0610 12:11:00.787486   66701 main.go:141] libmachine: (calico-491653) DBG | I0610 12:11:00.787432   66724 retry.go:31] will retry after 1.36445877s: waiting for machine to come up
	I0610 12:11:02.153842   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:02.154292   66701 main.go:141] libmachine: (calico-491653) DBG | unable to find current IP address of domain calico-491653 in network mk-calico-491653
	I0610 12:11:02.154320   66701 main.go:141] libmachine: (calico-491653) DBG | I0610 12:11:02.154247   66724 retry.go:31] will retry after 1.286358273s: waiting for machine to come up
	I0610 12:11:03.442626   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:03.443155   66701 main.go:141] libmachine: (calico-491653) DBG | unable to find current IP address of domain calico-491653 in network mk-calico-491653
	I0610 12:11:03.443177   66701 main.go:141] libmachine: (calico-491653) DBG | I0610 12:11:03.443115   66724 retry.go:31] will retry after 2.134942059s: waiting for machine to come up
	I0610 12:11:05.579881   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:05.580510   66701 main.go:141] libmachine: (calico-491653) DBG | unable to find current IP address of domain calico-491653 in network mk-calico-491653
	I0610 12:11:05.580539   66701 main.go:141] libmachine: (calico-491653) DBG | I0610 12:11:05.580452   66724 retry.go:31] will retry after 1.765342427s: waiting for machine to come up
	I0610 12:11:07.348357   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:07.348905   66701 main.go:141] libmachine: (calico-491653) DBG | unable to find current IP address of domain calico-491653 in network mk-calico-491653
	I0610 12:11:07.348937   66701 main.go:141] libmachine: (calico-491653) DBG | I0610 12:11:07.348849   66724 retry.go:31] will retry after 2.94690156s: waiting for machine to come up
	I0610 12:11:10.298832   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:10.299271   66701 main.go:141] libmachine: (calico-491653) DBG | unable to find current IP address of domain calico-491653 in network mk-calico-491653
	I0610 12:11:10.299293   66701 main.go:141] libmachine: (calico-491653) DBG | I0610 12:11:10.299224   66724 retry.go:31] will retry after 3.334932084s: waiting for machine to come up
	I0610 12:11:13.637997   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:13.638474   66701 main.go:141] libmachine: (calico-491653) DBG | unable to find current IP address of domain calico-491653 in network mk-calico-491653
	I0610 12:11:13.638525   66701 main.go:141] libmachine: (calico-491653) DBG | I0610 12:11:13.638447   66724 retry.go:31] will retry after 5.508632811s: waiting for machine to come up
	I0610 12:11:19.151019   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:19.151640   66701 main.go:141] libmachine: (calico-491653) Found IP for machine: 192.168.72.179
	I0610 12:11:19.151667   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has current primary IP address 192.168.72.179 and MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:19.151676   66701 main.go:141] libmachine: (calico-491653) Reserving static IP address...
	I0610 12:11:19.152134   66701 main.go:141] libmachine: (calico-491653) DBG | unable to find host DHCP lease matching {name: "calico-491653", mac: "52:54:00:33:40:d8", ip: "192.168.72.179"} in network mk-calico-491653
	I0610 12:11:19.237938   66701 main.go:141] libmachine: (calico-491653) Reserved static IP address: 192.168.72.179
	I0610 12:11:19.238003   66701 main.go:141] libmachine: (calico-491653) Waiting for SSH to be available...
	I0610 12:11:19.238013   66701 main.go:141] libmachine: (calico-491653) DBG | Getting to WaitForSSH function...
	I0610 12:11:19.241081   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:19.241540   66701 main.go:141] libmachine: (calico-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:40:d8", ip: ""} in network mk-calico-491653: {Iface:virbr3 ExpiryTime:2024-06-10 13:11:09 +0000 UTC Type:0 Mac:52:54:00:33:40:d8 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:minikube Clientid:01:52:54:00:33:40:d8}
	I0610 12:11:19.241572   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined IP address 192.168.72.179 and MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:19.241780   66701 main.go:141] libmachine: (calico-491653) DBG | Using SSH client type: external
	I0610 12:11:19.241811   66701 main.go:141] libmachine: (calico-491653) DBG | Using SSH private key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/calico-491653/id_rsa (-rw-------)
	I0610 12:11:19.241844   66701 main.go:141] libmachine: (calico-491653) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.179 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19046-3880/.minikube/machines/calico-491653/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0610 12:11:19.241863   66701 main.go:141] libmachine: (calico-491653) DBG | About to run SSH command:
	I0610 12:11:19.241879   66701 main.go:141] libmachine: (calico-491653) DBG | exit 0
	I0610 12:11:19.369577   66701 main.go:141] libmachine: (calico-491653) DBG | SSH cmd err, output: <nil>: 
	I0610 12:11:19.369900   66701 main.go:141] libmachine: (calico-491653) KVM machine creation complete!
	I0610 12:11:19.370249   66701 main.go:141] libmachine: (calico-491653) Calling .GetConfigRaw
	I0610 12:11:19.370812   66701 main.go:141] libmachine: (calico-491653) Calling .DriverName
	I0610 12:11:19.371020   66701 main.go:141] libmachine: (calico-491653) Calling .DriverName
	I0610 12:11:19.371196   66701 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0610 12:11:19.371214   66701 main.go:141] libmachine: (calico-491653) Calling .GetState
	I0610 12:11:19.372727   66701 main.go:141] libmachine: Detecting operating system of created instance...
	I0610 12:11:19.372745   66701 main.go:141] libmachine: Waiting for SSH to be available...
	I0610 12:11:19.372752   66701 main.go:141] libmachine: Getting to WaitForSSH function...
	I0610 12:11:19.372761   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHHostname
	I0610 12:11:19.375542   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:19.375904   66701 main.go:141] libmachine: (calico-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:40:d8", ip: ""} in network mk-calico-491653: {Iface:virbr3 ExpiryTime:2024-06-10 13:11:09 +0000 UTC Type:0 Mac:52:54:00:33:40:d8 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:calico-491653 Clientid:01:52:54:00:33:40:d8}
	I0610 12:11:19.375933   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined IP address 192.168.72.179 and MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:19.376068   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHPort
	I0610 12:11:19.376266   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHKeyPath
	I0610 12:11:19.376534   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHKeyPath
	I0610 12:11:19.376708   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHUsername
	I0610 12:11:19.376865   66701 main.go:141] libmachine: Using SSH client type: native
	I0610 12:11:19.377174   66701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0610 12:11:19.377192   66701 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0610 12:11:19.481347   66701 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 12:11:19.481375   66701 main.go:141] libmachine: Detecting the provisioner...
	I0610 12:11:19.481383   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHHostname
	I0610 12:11:19.484210   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:19.484633   66701 main.go:141] libmachine: (calico-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:40:d8", ip: ""} in network mk-calico-491653: {Iface:virbr3 ExpiryTime:2024-06-10 13:11:09 +0000 UTC Type:0 Mac:52:54:00:33:40:d8 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:calico-491653 Clientid:01:52:54:00:33:40:d8}
	I0610 12:11:19.484660   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined IP address 192.168.72.179 and MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:19.484832   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHPort
	I0610 12:11:19.485049   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHKeyPath
	I0610 12:11:19.485401   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHKeyPath
	I0610 12:11:19.485623   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHUsername
	I0610 12:11:19.485820   66701 main.go:141] libmachine: Using SSH client type: native
	I0610 12:11:19.486060   66701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0610 12:11:19.486074   66701 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0610 12:11:19.590216   66701 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0610 12:11:19.590305   66701 main.go:141] libmachine: found compatible host: buildroot
	I0610 12:11:19.590319   66701 main.go:141] libmachine: Provisioning with buildroot...
	I0610 12:11:19.590330   66701 main.go:141] libmachine: (calico-491653) Calling .GetMachineName
	I0610 12:11:19.590615   66701 buildroot.go:166] provisioning hostname "calico-491653"
	I0610 12:11:19.590640   66701 main.go:141] libmachine: (calico-491653) Calling .GetMachineName
	I0610 12:11:19.590842   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHHostname
	I0610 12:11:19.593990   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:19.594466   66701 main.go:141] libmachine: (calico-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:40:d8", ip: ""} in network mk-calico-491653: {Iface:virbr3 ExpiryTime:2024-06-10 13:11:09 +0000 UTC Type:0 Mac:52:54:00:33:40:d8 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:calico-491653 Clientid:01:52:54:00:33:40:d8}
	I0610 12:11:19.594497   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined IP address 192.168.72.179 and MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:19.594824   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHPort
	I0610 12:11:19.595092   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHKeyPath
	I0610 12:11:19.595304   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHKeyPath
	I0610 12:11:19.595452   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHUsername
	I0610 12:11:19.595663   66701 main.go:141] libmachine: Using SSH client type: native
	I0610 12:11:19.595896   66701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0610 12:11:19.595915   66701 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-491653 && echo "calico-491653" | sudo tee /etc/hostname
	I0610 12:11:19.717409   66701 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-491653
	
	I0610 12:11:19.717442   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHHostname
	I0610 12:11:19.720408   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:19.720735   66701 main.go:141] libmachine: (calico-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:40:d8", ip: ""} in network mk-calico-491653: {Iface:virbr3 ExpiryTime:2024-06-10 13:11:09 +0000 UTC Type:0 Mac:52:54:00:33:40:d8 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:calico-491653 Clientid:01:52:54:00:33:40:d8}
	I0610 12:11:19.720764   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined IP address 192.168.72.179 and MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:19.721044   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHPort
	I0610 12:11:19.721297   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHKeyPath
	I0610 12:11:19.721497   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHKeyPath
	I0610 12:11:19.721663   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHUsername
	I0610 12:11:19.721824   66701 main.go:141] libmachine: Using SSH client type: native
	I0610 12:11:19.722010   66701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0610 12:11:19.722037   66701 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-491653' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-491653/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-491653' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 12:11:19.833980   66701 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 12:11:19.834005   66701 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19046-3880/.minikube CaCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19046-3880/.minikube}
	I0610 12:11:19.834048   66701 buildroot.go:174] setting up certificates
	I0610 12:11:19.834068   66701 provision.go:84] configureAuth start
	I0610 12:11:19.834079   66701 main.go:141] libmachine: (calico-491653) Calling .GetMachineName
	I0610 12:11:19.834383   66701 main.go:141] libmachine: (calico-491653) Calling .GetIP
	I0610 12:11:19.837595   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:19.837972   66701 main.go:141] libmachine: (calico-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:40:d8", ip: ""} in network mk-calico-491653: {Iface:virbr3 ExpiryTime:2024-06-10 13:11:09 +0000 UTC Type:0 Mac:52:54:00:33:40:d8 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:calico-491653 Clientid:01:52:54:00:33:40:d8}
	I0610 12:11:19.838001   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined IP address 192.168.72.179 and MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:19.838146   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHHostname
	I0610 12:11:19.840516   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:19.840835   66701 main.go:141] libmachine: (calico-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:40:d8", ip: ""} in network mk-calico-491653: {Iface:virbr3 ExpiryTime:2024-06-10 13:11:09 +0000 UTC Type:0 Mac:52:54:00:33:40:d8 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:calico-491653 Clientid:01:52:54:00:33:40:d8}
	I0610 12:11:19.840856   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined IP address 192.168.72.179 and MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:19.840990   66701 provision.go:143] copyHostCerts
	I0610 12:11:19.841057   66701 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem, removing ...
	I0610 12:11:19.841070   66701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 12:11:19.841158   66701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem (1123 bytes)
	I0610 12:11:19.841287   66701 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem, removing ...
	I0610 12:11:19.841299   66701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 12:11:19.841340   66701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem (1675 bytes)
	I0610 12:11:19.841434   66701 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem, removing ...
	I0610 12:11:19.841443   66701 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 12:11:19.841477   66701 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem (1082 bytes)
	I0610 12:11:19.841533   66701 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem org=jenkins.calico-491653 san=[127.0.0.1 192.168.72.179 calico-491653 localhost minikube]
	I0610 12:11:19.969795   66701 provision.go:177] copyRemoteCerts
	I0610 12:11:19.969851   66701 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 12:11:19.969874   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHHostname
	I0610 12:11:19.973126   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:19.973590   66701 main.go:141] libmachine: (calico-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:40:d8", ip: ""} in network mk-calico-491653: {Iface:virbr3 ExpiryTime:2024-06-10 13:11:09 +0000 UTC Type:0 Mac:52:54:00:33:40:d8 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:calico-491653 Clientid:01:52:54:00:33:40:d8}
	I0610 12:11:19.973618   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined IP address 192.168.72.179 and MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:19.973795   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHPort
	I0610 12:11:19.974023   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHKeyPath
	I0610 12:11:19.974211   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHUsername
	I0610 12:11:19.974373   66701 sshutil.go:53] new ssh client: &{IP:192.168.72.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/calico-491653/id_rsa Username:docker}
	I0610 12:11:20.061114   66701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 12:11:20.086651   66701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0610 12:11:20.112474   66701 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 12:11:20.141765   66701 provision.go:87] duration metric: took 307.680809ms to configureAuth
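configureAuth above generates a server certificate whose SANs cover 127.0.0.1, the machine IP, the machine name, localhost, and minikube, signed by the profile's CA. The Go sketch below shows roughly that flow with crypto/x509, using a throwaway in-memory CA instead of the ca.pem/ca-key.pem files the log references; names, lifetimes, and the elided error handling are all illustrative.

```go
// Sketch of issuing a SAN-bearing server certificate signed by a CA, as the
// provisioning step above does. Error handling is elided for brevity.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA, standing in for the ca.pem/ca-key.pem pair the log reads.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the same SANs the log lists for calico-491653.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.calico-491653"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.179")},
		DNSNames:     []string{"calico-491653", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}
```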
	I0610 12:11:20.141800   66701 buildroot.go:189] setting minikube options for container-runtime
	I0610 12:11:20.142017   66701 config.go:182] Loaded profile config "calico-491653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 12:11:20.142112   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHHostname
	I0610 12:11:20.144965   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:20.145406   66701 main.go:141] libmachine: (calico-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:40:d8", ip: ""} in network mk-calico-491653: {Iface:virbr3 ExpiryTime:2024-06-10 13:11:09 +0000 UTC Type:0 Mac:52:54:00:33:40:d8 Iaid: IPaddr:192.168.72.179 Prefix:24 Hostname:calico-491653 Clientid:01:52:54:00:33:40:d8}
	I0610 12:11:20.145434   66701 main.go:141] libmachine: (calico-491653) DBG | domain calico-491653 has defined IP address 192.168.72.179 and MAC address 52:54:00:33:40:d8 in network mk-calico-491653
	I0610 12:11:20.145620   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHPort
	I0610 12:11:20.145815   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHKeyPath
	I0610 12:11:20.145993   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHKeyPath
	I0610 12:11:20.146171   66701 main.go:141] libmachine: (calico-491653) Calling .GetSSHUsername
	I0610 12:11:20.146389   66701 main.go:141] libmachine: Using SSH client type: native
	I0610 12:11:20.146555   66701 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.72.179 22 <nil> <nil>}
	I0610 12:11:20.146569   66701 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	
	
	==> CRI-O <==
	Jun 10 12:11:22 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:11:22.381721413Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718021482381678899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=59bafff2-8dda-4dda-b2f7-a443206b1323 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:11:22 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:11:22.382404191Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ebfbbbdd-fb7a-4def-97e2-38eee01fd747 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:11:22 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:11:22.382522788Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ebfbbbdd-fb7a-4def-97e2-38eee01fd747 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:11:22 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:11:22.383015157Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e665a2fb5aecc808097f2fc05d79904e306ff78e8236dae6c9f7e09bce5e7d10,PodSandboxId:a9d3e9e4ec0e2b59767845bed3dd6c145cd768d55411c2f28d5bf26e499a28db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718020938827005760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8df0a38c-5e91-4b10-a303-c4eff9545669,},Annotations:map[string]string{io.kubernetes.container.hash: f3f5f7e9,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ba3974695bcb24d4b2cc8663b2aa027f6b410c22fea995bdcb40dfbd617433,PodSandboxId:9bb6ddaadc05193b6f50efff54d843ef10a59c3c2beed571999521b753dc71f5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718020938113799597,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wh756,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57cbf3d6-c149-4ae1-84d3-6df6a53ea091,},Annotations:map[string]string{io.kubernetes.container.hash: 17aa3131,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce134635118f7b2df18802cbc00fa342ccd3073a3443738aa4756dca35584e82,PodSandboxId:231539d0028b33c319a6b6db3544bbbea03be1eba9e25caf0c3a64056d67f4ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020937722768597,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5fgtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d948ca-122a-4042-8371-8a9422c187bc,},Annotations:map[string]string{io.kubernetes.container.hash: e063c420,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede65a395c6808abbdc027050debd911c62f6c6caf8a06f602eede88005380d3,PodSandboxId:bf214ddcb42cc130013624d4d24f34997d3174e052fe2e1d685309419830855b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020937588701477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fg8xx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e91ae09c-8821-4843-8c0d-
ea734433c213,},Annotations:map[string]string{io.kubernetes.container.hash: 6835c88c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0d5ff9212eb4d5532fe9dc9affa7331ae4ff1f5f5eb3a2e8e42b0133c616a70,PodSandboxId:4215d285111a70838d992640373dfb8d016f1e9d2bd7192ab9046d8b56fca700,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:171802091
7787872621,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d424dbcac48429c7d039d6107e300dc3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:746761e0904148694c14f03f97d46a2d2a04dd5aa50fc3f71fc632a115b40a21,PodSandboxId:b1d8bca51772f4492d8104060796b139c6eb38d6620714327699e98031b691fa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:171
8020917774776034,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c88a335fe375918bcfd46be4831435f7,},Annotations:map[string]string{io.kubernetes.container.hash: e653e9ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622e3a8adfbcb60a4cf30c281f0c60f9d7c3bff06b1cf111b2cc27d0692eebf5,PodSandboxId:8406c2ed5bf34cde9ed1c5ec05ae7753f39aefdede064fff143e68299e93dada,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718020917740984045,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f17235c9a9d5b1f2ccf38065ada94e3,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d832964f75572ba827c846938c023588ee720568af6f4209d8669bbbf714be81,PodSandboxId:1c90a6e342a603712e161be1f0f35d7f9b90848253ff2c30f0a613ddb819e8f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718020917693294419,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afe3822d5bbfe48baace364462a72d7,},Annotations:map[string]string{io.kubernetes.container.hash: ebaede52,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ebfbbbdd-fb7a-4def-97e2-38eee01fd747 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:11:22 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:11:22.431581003Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e4cad398-5a9f-4bbe-8357-c76a2814c815 name=/runtime.v1.RuntimeService/Version
	Jun 10 12:11:22 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:11:22.431704951Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e4cad398-5a9f-4bbe-8357-c76a2814c815 name=/runtime.v1.RuntimeService/Version
	Jun 10 12:11:22 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:11:22.433826537Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0c57e916-65d5-4d72-a2b5-0e2de968d861 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:11:22 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:11:22.434434472Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718021482434400829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c57e916-65d5-4d72-a2b5-0e2de968d861 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:11:22 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:11:22.435427537Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=396ef959-c644-4e99-9fca-2687c3806c76 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:11:22 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:11:22.435545593Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=396ef959-c644-4e99-9fca-2687c3806c76 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:11:22 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:11:22.435823801Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e665a2fb5aecc808097f2fc05d79904e306ff78e8236dae6c9f7e09bce5e7d10,PodSandboxId:a9d3e9e4ec0e2b59767845bed3dd6c145cd768d55411c2f28d5bf26e499a28db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718020938827005760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8df0a38c-5e91-4b10-a303-c4eff9545669,},Annotations:map[string]string{io.kubernetes.container.hash: f3f5f7e9,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ba3974695bcb24d4b2cc8663b2aa027f6b410c22fea995bdcb40dfbd617433,PodSandboxId:9bb6ddaadc05193b6f50efff54d843ef10a59c3c2beed571999521b753dc71f5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718020938113799597,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wh756,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57cbf3d6-c149-4ae1-84d3-6df6a53ea091,},Annotations:map[string]string{io.kubernetes.container.hash: 17aa3131,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce134635118f7b2df18802cbc00fa342ccd3073a3443738aa4756dca35584e82,PodSandboxId:231539d0028b33c319a6b6db3544bbbea03be1eba9e25caf0c3a64056d67f4ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020937722768597,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5fgtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d948ca-122a-4042-8371-8a9422c187bc,},Annotations:map[string]string{io.kubernetes.container.hash: e063c420,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede65a395c6808abbdc027050debd911c62f6c6caf8a06f602eede88005380d3,PodSandboxId:bf214ddcb42cc130013624d4d24f34997d3174e052fe2e1d685309419830855b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020937588701477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fg8xx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e91ae09c-8821-4843-8c0d-
ea734433c213,},Annotations:map[string]string{io.kubernetes.container.hash: 6835c88c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0d5ff9212eb4d5532fe9dc9affa7331ae4ff1f5f5eb3a2e8e42b0133c616a70,PodSandboxId:4215d285111a70838d992640373dfb8d016f1e9d2bd7192ab9046d8b56fca700,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:171802091
7787872621,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d424dbcac48429c7d039d6107e300dc3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:746761e0904148694c14f03f97d46a2d2a04dd5aa50fc3f71fc632a115b40a21,PodSandboxId:b1d8bca51772f4492d8104060796b139c6eb38d6620714327699e98031b691fa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:171
8020917774776034,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c88a335fe375918bcfd46be4831435f7,},Annotations:map[string]string{io.kubernetes.container.hash: e653e9ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622e3a8adfbcb60a4cf30c281f0c60f9d7c3bff06b1cf111b2cc27d0692eebf5,PodSandboxId:8406c2ed5bf34cde9ed1c5ec05ae7753f39aefdede064fff143e68299e93dada,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718020917740984045,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f17235c9a9d5b1f2ccf38065ada94e3,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d832964f75572ba827c846938c023588ee720568af6f4209d8669bbbf714be81,PodSandboxId:1c90a6e342a603712e161be1f0f35d7f9b90848253ff2c30f0a613ddb819e8f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718020917693294419,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afe3822d5bbfe48baace364462a72d7,},Annotations:map[string]string{io.kubernetes.container.hash: ebaede52,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=396ef959-c644-4e99-9fca-2687c3806c76 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:11:22 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:11:22.482381165Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c9f5977d-2d30-43da-a3b3-953c82068fab name=/runtime.v1.RuntimeService/Version
	Jun 10 12:11:22 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:11:22.482516608Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c9f5977d-2d30-43da-a3b3-953c82068fab name=/runtime.v1.RuntimeService/Version
	Jun 10 12:11:22 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:11:22.483995623Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=afd379da-ae1a-48c7-b74f-b34ac6463193 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:11:22 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:11:22.484407949Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718021482484386524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=afd379da-ae1a-48c7-b74f-b34ac6463193 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:11:22 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:11:22.485059437Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0d30ae8-95b6-4ba5-9e11-ba243d173b3f name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:11:22 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:11:22.485111491Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0d30ae8-95b6-4ba5-9e11-ba243d173b3f name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:11:22 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:11:22.485315927Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e665a2fb5aecc808097f2fc05d79904e306ff78e8236dae6c9f7e09bce5e7d10,PodSandboxId:a9d3e9e4ec0e2b59767845bed3dd6c145cd768d55411c2f28d5bf26e499a28db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718020938827005760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8df0a38c-5e91-4b10-a303-c4eff9545669,},Annotations:map[string]string{io.kubernetes.container.hash: f3f5f7e9,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ba3974695bcb24d4b2cc8663b2aa027f6b410c22fea995bdcb40dfbd617433,PodSandboxId:9bb6ddaadc05193b6f50efff54d843ef10a59c3c2beed571999521b753dc71f5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718020938113799597,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wh756,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57cbf3d6-c149-4ae1-84d3-6df6a53ea091,},Annotations:map[string]string{io.kubernetes.container.hash: 17aa3131,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce134635118f7b2df18802cbc00fa342ccd3073a3443738aa4756dca35584e82,PodSandboxId:231539d0028b33c319a6b6db3544bbbea03be1eba9e25caf0c3a64056d67f4ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020937722768597,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5fgtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d948ca-122a-4042-8371-8a9422c187bc,},Annotations:map[string]string{io.kubernetes.container.hash: e063c420,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede65a395c6808abbdc027050debd911c62f6c6caf8a06f602eede88005380d3,PodSandboxId:bf214ddcb42cc130013624d4d24f34997d3174e052fe2e1d685309419830855b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020937588701477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fg8xx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e91ae09c-8821-4843-8c0d-
ea734433c213,},Annotations:map[string]string{io.kubernetes.container.hash: 6835c88c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0d5ff9212eb4d5532fe9dc9affa7331ae4ff1f5f5eb3a2e8e42b0133c616a70,PodSandboxId:4215d285111a70838d992640373dfb8d016f1e9d2bd7192ab9046d8b56fca700,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:171802091
7787872621,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d424dbcac48429c7d039d6107e300dc3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:746761e0904148694c14f03f97d46a2d2a04dd5aa50fc3f71fc632a115b40a21,PodSandboxId:b1d8bca51772f4492d8104060796b139c6eb38d6620714327699e98031b691fa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:171
8020917774776034,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c88a335fe375918bcfd46be4831435f7,},Annotations:map[string]string{io.kubernetes.container.hash: e653e9ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622e3a8adfbcb60a4cf30c281f0c60f9d7c3bff06b1cf111b2cc27d0692eebf5,PodSandboxId:8406c2ed5bf34cde9ed1c5ec05ae7753f39aefdede064fff143e68299e93dada,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718020917740984045,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f17235c9a9d5b1f2ccf38065ada94e3,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d832964f75572ba827c846938c023588ee720568af6f4209d8669bbbf714be81,PodSandboxId:1c90a6e342a603712e161be1f0f35d7f9b90848253ff2c30f0a613ddb819e8f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718020917693294419,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afe3822d5bbfe48baace364462a72d7,},Annotations:map[string]string{io.kubernetes.container.hash: ebaede52,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0d30ae8-95b6-4ba5-9e11-ba243d173b3f name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:11:22 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:11:22.530778523Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd67cd07-6d0f-485c-b3df-5fa97dcedc9b name=/runtime.v1.RuntimeService/Version
	Jun 10 12:11:22 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:11:22.530857276Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd67cd07-6d0f-485c-b3df-5fa97dcedc9b name=/runtime.v1.RuntimeService/Version
	Jun 10 12:11:22 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:11:22.532783278Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d28e385e-0734-4b78-9efc-42a34d5d0759 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:11:22 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:11:22.533178238Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718021482533157089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d28e385e-0734-4b78-9efc-42a34d5d0759 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:11:22 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:11:22.534194254Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de19154b-439f-4179-8233-c741422e3e3e name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:11:22 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:11:22.534257953Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de19154b-439f-4179-8233-c741422e3e3e name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:11:22 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:11:22.534676555Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e665a2fb5aecc808097f2fc05d79904e306ff78e8236dae6c9f7e09bce5e7d10,PodSandboxId:a9d3e9e4ec0e2b59767845bed3dd6c145cd768d55411c2f28d5bf26e499a28db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718020938827005760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8df0a38c-5e91-4b10-a303-c4eff9545669,},Annotations:map[string]string{io.kubernetes.container.hash: f3f5f7e9,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ba3974695bcb24d4b2cc8663b2aa027f6b410c22fea995bdcb40dfbd617433,PodSandboxId:9bb6ddaadc05193b6f50efff54d843ef10a59c3c2beed571999521b753dc71f5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718020938113799597,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wh756,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57cbf3d6-c149-4ae1-84d3-6df6a53ea091,},Annotations:map[string]string{io.kubernetes.container.hash: 17aa3131,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce134635118f7b2df18802cbc00fa342ccd3073a3443738aa4756dca35584e82,PodSandboxId:231539d0028b33c319a6b6db3544bbbea03be1eba9e25caf0c3a64056d67f4ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020937722768597,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5fgtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d948ca-122a-4042-8371-8a9422c187bc,},Annotations:map[string]string{io.kubernetes.container.hash: e063c420,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede65a395c6808abbdc027050debd911c62f6c6caf8a06f602eede88005380d3,PodSandboxId:bf214ddcb42cc130013624d4d24f34997d3174e052fe2e1d685309419830855b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020937588701477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fg8xx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e91ae09c-8821-4843-8c0d-
ea734433c213,},Annotations:map[string]string{io.kubernetes.container.hash: 6835c88c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0d5ff9212eb4d5532fe9dc9affa7331ae4ff1f5f5eb3a2e8e42b0133c616a70,PodSandboxId:4215d285111a70838d992640373dfb8d016f1e9d2bd7192ab9046d8b56fca700,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:171802091
7787872621,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d424dbcac48429c7d039d6107e300dc3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:746761e0904148694c14f03f97d46a2d2a04dd5aa50fc3f71fc632a115b40a21,PodSandboxId:b1d8bca51772f4492d8104060796b139c6eb38d6620714327699e98031b691fa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:171
8020917774776034,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c88a335fe375918bcfd46be4831435f7,},Annotations:map[string]string{io.kubernetes.container.hash: e653e9ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622e3a8adfbcb60a4cf30c281f0c60f9d7c3bff06b1cf111b2cc27d0692eebf5,PodSandboxId:8406c2ed5bf34cde9ed1c5ec05ae7753f39aefdede064fff143e68299e93dada,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718020917740984045,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f17235c9a9d5b1f2ccf38065ada94e3,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d832964f75572ba827c846938c023588ee720568af6f4209d8669bbbf714be81,PodSandboxId:1c90a6e342a603712e161be1f0f35d7f9b90848253ff2c30f0a613ddb819e8f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718020917693294419,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afe3822d5bbfe48baace364462a72d7,},Annotations:map[string]string{io.kubernetes.container.hash: ebaede52,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=de19154b-439f-4179-8233-c741422e3e3e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e665a2fb5aecc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   a9d3e9e4ec0e2       storage-provisioner
	26ba3974695bc       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   9 minutes ago       Running             kube-proxy                0                   9bb6ddaadc051       kube-proxy-wh756
	ce134635118f7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   231539d0028b3       coredns-7db6d8ff4d-5fgtk
	ede65a395c680       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   bf214ddcb42cc       coredns-7db6d8ff4d-fg8xx
	c0d5ff9212eb4       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   9 minutes ago       Running             kube-controller-manager   2                   4215d285111a7       kube-controller-manager-default-k8s-diff-port-281114
	746761e090414       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   b1d8bca51772f       etcd-default-k8s-diff-port-281114
	622e3a8adfbcb       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   9 minutes ago       Running             kube-scheduler            2                   8406c2ed5bf34       kube-scheduler-default-k8s-diff-port-281114
	d832964f75572       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   9 minutes ago       Running             kube-apiserver            2                   1c90a6e342a60       kube-apiserver-default-k8s-diff-port-281114
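	The crio debug entries above are the server side of ordinary CRI gRPC calls (Version, ImageFsInfo, ListContainers), and the "container status" table is a condensed view of the same ListContainers payload. The following Go sketch issues that call directly, assuming the standard CRI-O socket path unix:///var/run/crio/crio.sock (the same path the node's cri-socket annotation reports further down); it is illustrative only, not the test harness's code.

// Minimal sketch: call ListContainers against CRI-O's socket and print the
// same fields the table above summarizes (ID, name, attempt, state).
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path; adjust if the runtime is configured differently.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%.13s  %-25s  attempt=%d  %s\n",
			c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}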
	
	
	==> coredns [ce134635118f7b2df18802cbc00fa342ccd3073a3443738aa4756dca35584e82] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ede65a395c6808abbdc027050debd911c62f6c6caf8a06f602eede88005380d3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-281114
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-281114
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=default-k8s-diff-port-281114
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T12_02_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 12:02:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-281114
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 12:11:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 12:07:29 +0000   Mon, 10 Jun 2024 12:01:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 12:07:29 +0000   Mon, 10 Jun 2024 12:01:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 12:07:29 +0000   Mon, 10 Jun 2024 12:01:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 12:07:29 +0000   Mon, 10 Jun 2024 12:02:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.222
	  Hostname:    default-k8s-diff-port-281114
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7fe69d065bac483cbbf95ca19ccd8066
	  System UUID:                7fe69d06-5bac-483c-bbf9-5ca19ccd8066
	  Boot ID:                    6463d4c2-e1b4-4d25-8caf-0d032d5e18c0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-5fgtk                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 coredns-7db6d8ff4d-fg8xx                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 etcd-default-k8s-diff-port-281114                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-default-k8s-diff-port-281114             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-281114    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-wh756                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-default-k8s-diff-port-281114             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-569cc877fc-j58s9                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m4s   kube-proxy       
	  Normal  Starting                 9m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s  kubelet          Node default-k8s-diff-port-281114 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s  kubelet          Node default-k8s-diff-port-281114 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s  kubelet          Node default-k8s-diff-port-281114 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m7s   node-controller  Node default-k8s-diff-port-281114 event: Registered Node default-k8s-diff-port-281114 in Controller
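	The percentages in the Allocated resources block are just the summed pod requests/limits divided by the node's Allocatable values (2 CPUs, 2164184Ki memory), truncated to whole percent. A small sketch of that arithmetic, using only numbers from the tables above:

// Quick check of the "Allocated resources" percentages against the node's
// Allocatable values. Plain integer arithmetic; kubectl's resource.Quantity
// handling is skipped here for brevity.
package main

import "fmt"

func main() {
	const (
		allocatableMilliCPU = 2 * 1000   // 2 CPUs -> 2000m
		allocatableMemKi    = 2164184    // node Allocatable memory, Ki
		requestedMilliCPU   = 950        // 100+100+100+250+200+0+100+100+0
		requestedMemKi      = 440 * 1024 // 440Mi of requests
		limitMemKi          = 340 * 1024 // 340Mi of limits
	)
	fmt.Printf("cpu requests: %d%%\n", requestedMilliCPU*100/allocatableMilliCPU) // 47%
	fmt.Printf("memory requests: %d%%\n", requestedMemKi*100/allocatableMemKi)    // 20%
	fmt.Printf("memory limits: %d%%\n", limitMemKi*100/allocatableMemKi)          // 16%
}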
	
	
	==> dmesg <==
	[  +0.040033] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.610449] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.841111] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.542956] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.780331] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.061871] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061073] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.169738] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.146134] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +0.287034] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[Jun10 11:57] systemd-fstab-generator[805]: Ignoring "noauto" option for root device
	[  +1.939897] systemd-fstab-generator[929]: Ignoring "noauto" option for root device
	[  +0.070518] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.515579] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.048719] kauditd_printk_skb: 50 callbacks suppressed
	[  +7.141243] kauditd_printk_skb: 27 callbacks suppressed
	[Jun10 12:01] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.844952] systemd-fstab-generator[3572]: Ignoring "noauto" option for root device
	[Jun10 12:02] kauditd_printk_skb: 57 callbacks suppressed
	[  +1.589189] systemd-fstab-generator[3894]: Ignoring "noauto" option for root device
	[ +14.360359] systemd-fstab-generator[4106]: Ignoring "noauto" option for root device
	[  +0.035003] kauditd_printk_skb: 14 callbacks suppressed
	[Jun10 12:03] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [746761e0904148694c14f03f97d46a2d2a04dd5aa50fc3f71fc632a115b40a21] <==
	{"level":"info","ts":"2024-06-10T12:01:58.883588Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.222:2379"}
	{"level":"info","ts":"2024-06-10T12:09:31.542173Z","caller":"traceutil/trace.go:171","msg":"trace[879242166] linearizableReadLoop","detail":"{readStateIndex:902; appliedIndex:901; }","duration":"164.779364ms","start":"2024-06-10T12:09:31.377337Z","end":"2024-06-10T12:09:31.542117Z","steps":["trace[879242166] 'read index received'  (duration: 164.62084ms)","trace[879242166] 'applied index is now lower than readState.Index'  (duration: 157.849µs)"],"step_count":2}
	{"level":"info","ts":"2024-06-10T12:09:31.54237Z","caller":"traceutil/trace.go:171","msg":"trace[12788680] transaction","detail":"{read_only:false; response_revision:802; number_of_response:1; }","duration":"299.447364ms","start":"2024-06-10T12:09:31.242897Z","end":"2024-06-10T12:09:31.542345Z","steps":["trace[12788680] 'process raft request'  (duration: 299.097193ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T12:09:31.54263Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.104785ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-10T12:09:31.543017Z","caller":"traceutil/trace.go:171","msg":"trace[1534197008] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; response_count:0; response_revision:802; }","duration":"165.69533ms","start":"2024-06-10T12:09:31.377307Z","end":"2024-06-10T12:09:31.543002Z","steps":["trace[1534197008] 'agreement among raft nodes before linearized reading'  (duration: 165.104684ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T12:09:57.547381Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.426186ms","expected-duration":"100ms","prefix":"","request":"header:<ID:720734278757649624 > lease_revoke:<id:0a00900206cc0c87>","response":"size:28"}
	{"level":"info","ts":"2024-06-10T12:09:57.985599Z","caller":"traceutil/trace.go:171","msg":"trace[1759716622] transaction","detail":"{read_only:false; response_revision:823; number_of_response:1; }","duration":"297.420781ms","start":"2024-06-10T12:09:57.688135Z","end":"2024-06-10T12:09:57.985556Z","steps":["trace[1759716622] 'process raft request'  (duration: 297.201896ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T12:09:58.44744Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"267.189057ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-10T12:09:58.448257Z","caller":"traceutil/trace.go:171","msg":"trace[345382769] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:823; }","duration":"268.049345ms","start":"2024-06-10T12:09:58.180178Z","end":"2024-06-10T12:09:58.448227Z","steps":["trace[345382769] 'range keys from in-memory index tree'  (duration: 267.119653ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T12:10:22.24777Z","caller":"traceutil/trace.go:171","msg":"trace[1826220013] linearizableReadLoop","detail":"{readStateIndex:952; appliedIndex:951; }","duration":"110.482562ms","start":"2024-06-10T12:10:22.137273Z","end":"2024-06-10T12:10:22.247756Z","steps":["trace[1826220013] 'read index received'  (duration: 110.282306ms)","trace[1826220013] 'applied index is now lower than readState.Index'  (duration: 199.721µs)"],"step_count":2}
	{"level":"info","ts":"2024-06-10T12:10:22.248014Z","caller":"traceutil/trace.go:171","msg":"trace[195087448] transaction","detail":"{read_only:false; response_revision:842; number_of_response:1; }","duration":"128.76193ms","start":"2024-06-10T12:10:22.119242Z","end":"2024-06-10T12:10:22.248004Z","steps":["trace[195087448] 'process raft request'  (duration: 128.382681ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T12:10:22.248289Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.008882ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.50.222\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-06-10T12:10:22.249Z","caller":"traceutil/trace.go:171","msg":"trace[1164094067] range","detail":"{range_begin:/registry/masterleases/192.168.50.222; range_end:; response_count:1; response_revision:842; }","duration":"111.768509ms","start":"2024-06-10T12:10:22.137217Z","end":"2024-06-10T12:10:22.248986Z","steps":["trace[1164094067] 'agreement among raft nodes before linearized reading'  (duration: 110.945335ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T12:10:22.485043Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.098909ms","expected-duration":"100ms","prefix":"","request":"header:<ID:720734278757649744 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:0a00900206cc0d4f>","response":"size:40"}
	{"level":"info","ts":"2024-06-10T12:10:22.48512Z","caller":"traceutil/trace.go:171","msg":"trace[1690147151] linearizableReadLoop","detail":"{readStateIndex:953; appliedIndex:952; }","duration":"140.405788ms","start":"2024-06-10T12:10:22.344703Z","end":"2024-06-10T12:10:22.485109Z","steps":["trace[1690147151] 'read index received'  (duration: 15.183432ms)","trace[1690147151] 'applied index is now lower than readState.Index'  (duration: 125.221596ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-10T12:10:22.485182Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.470239ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-10T12:10:22.485197Z","caller":"traceutil/trace.go:171","msg":"trace[1347775098] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:842; }","duration":"140.511245ms","start":"2024-06-10T12:10:22.344679Z","end":"2024-06-10T12:10:22.485191Z","steps":["trace[1347775098] 'agreement among raft nodes before linearized reading'  (duration: 140.463231ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T12:10:40.39625Z","caller":"traceutil/trace.go:171","msg":"trace[463860938] transaction","detail":"{read_only:false; response_revision:856; number_of_response:1; }","duration":"106.063669ms","start":"2024-06-10T12:10:40.290141Z","end":"2024-06-10T12:10:40.396205Z","steps":["trace[463860938] 'process raft request'  (duration: 105.63763ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T12:10:40.671179Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.763259ms","expected-duration":"100ms","prefix":"","request":"header:<ID:720734278757649824 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:855 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-10T12:10:40.671422Z","caller":"traceutil/trace.go:171","msg":"trace[572913662] transaction","detail":"{read_only:false; response_revision:857; number_of_response:1; }","duration":"265.343979ms","start":"2024-06-10T12:10:40.406059Z","end":"2024-06-10T12:10:40.671403Z","steps":["trace[572913662] 'process raft request'  (duration: 119.268989ms)","trace[572913662] 'compare'  (duration: 144.660164ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-10T12:10:40.671889Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.22904ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-10T12:10:40.671944Z","caller":"traceutil/trace.go:171","msg":"trace[124069812] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:857; }","duration":"110.296027ms","start":"2024-06-10T12:10:40.561635Z","end":"2024-06-10T12:10:40.671931Z","steps":["trace[124069812] 'agreement among raft nodes before linearized reading'  (duration: 110.196759ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T12:10:40.671776Z","caller":"traceutil/trace.go:171","msg":"trace[1968203184] linearizableReadLoop","detail":"{readStateIndex:971; appliedIndex:970; }","duration":"109.708919ms","start":"2024-06-10T12:10:40.561639Z","end":"2024-06-10T12:10:40.671348Z","steps":["trace[1968203184] 'read index received'  (duration: 73.653µs)","trace[1968203184] 'applied index is now lower than readState.Index'  (duration: 109.633603ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-10T12:10:41.314818Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.194237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-10T12:10:41.314972Z","caller":"traceutil/trace.go:171","msg":"trace[1563114442] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:857; }","duration":"136.428358ms","start":"2024-06-10T12:10:41.178527Z","end":"2024-06-10T12:10:41.314956Z","steps":["trace[1563114442] 'range keys from in-memory index tree'  (duration: 136.129657ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:11:23 up 14 min,  0 users,  load average: 0.25, 0.18, 0.16
	Linux default-k8s-diff-port-281114 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d832964f75572ba827c846938c023588ee720568af6f4209d8669bbbf714be81] <==
	I0610 12:05:19.478010       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:07:00.435896       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:07:00.436016       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0610 12:07:01.437114       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:07:01.437175       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0610 12:07:01.437184       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:07:01.437278       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:07:01.437384       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0610 12:07:01.438593       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:08:01.437753       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:08:01.438018       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0610 12:08:01.438108       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:08:01.438872       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:08:01.439031       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0610 12:08:01.439280       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:10:01.438622       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:10:01.438886       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0610 12:10:01.438919       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:10:01.440154       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:10:01.440260       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0610 12:10:01.440294       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c0d5ff9212eb4d5532fe9dc9affa7331ae4ff1f5f5eb3a2e8e42b0133c616a70] <==
	I0610 12:05:46.324517       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:06:15.873423       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:06:16.335334       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:06:45.878207       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:06:46.346107       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:07:15.884749       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:07:16.355198       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:07:45.889833       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:07:46.362711       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0610 12:08:11.025814       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="428.265µs"
	E0610 12:08:15.895995       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:08:16.369563       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0610 12:08:26.019428       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="202.193µs"
	E0610 12:08:45.902550       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:08:46.378406       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:09:15.907657       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:09:16.386109       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:09:45.913432       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:09:46.393373       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:10:15.921540       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:10:16.404906       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:10:45.927574       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:10:46.414726       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:11:15.935389       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:11:16.422346       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [26ba3974695bcb24d4b2cc8663b2aa027f6b410c22fea995bdcb40dfbd617433] <==
	I0610 12:02:18.619506       1 server_linux.go:69] "Using iptables proxy"
	I0610 12:02:18.644097       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.222"]
	I0610 12:02:18.717027       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 12:02:18.717076       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 12:02:18.717094       1 server_linux.go:165] "Using iptables Proxier"
	I0610 12:02:18.721203       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 12:02:18.721428       1 server.go:872] "Version info" version="v1.30.1"
	I0610 12:02:18.721512       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:02:18.722857       1 config.go:192] "Starting service config controller"
	I0610 12:02:18.722887       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 12:02:18.722913       1 config.go:101] "Starting endpoint slice config controller"
	I0610 12:02:18.722917       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 12:02:18.723640       1 config.go:319] "Starting node config controller"
	I0610 12:02:18.723678       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 12:02:18.823586       1 shared_informer.go:320] Caches are synced for service config
	I0610 12:02:18.823571       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 12:02:18.823821       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [622e3a8adfbcb60a4cf30c281f0c60f9d7c3bff06b1cf111b2cc27d0692eebf5] <==
	W0610 12:02:00.455285       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 12:02:00.455310       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 12:02:01.325571       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0610 12:02:01.325614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0610 12:02:01.334373       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 12:02:01.334446       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 12:02:01.347034       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 12:02:01.347124       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0610 12:02:01.518886       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 12:02:01.519561       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0610 12:02:01.536357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 12:02:01.536641       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0610 12:02:01.543964       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0610 12:02:01.544118       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0610 12:02:01.591526       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0610 12:02:01.592685       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0610 12:02:01.641782       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 12:02:01.641811       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 12:02:01.652518       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 12:02:01.652560       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 12:02:01.670081       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 12:02:01.671304       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0610 12:02:01.681136       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0610 12:02:01.681234       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 12:02:04.839248       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 10 12:09:03 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:09:03.019438    3901 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:09:03 default-k8s-diff-port-281114 kubelet[3901]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:09:03 default-k8s-diff-port-281114 kubelet[3901]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:09:03 default-k8s-diff-port-281114 kubelet[3901]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:09:03 default-k8s-diff-port-281114 kubelet[3901]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:09:06 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:09:06.004376    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j58s9" podUID="f1c91612-b967-447e-bc71-13ba0d11864b"
	Jun 10 12:09:18 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:09:18.003415    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j58s9" podUID="f1c91612-b967-447e-bc71-13ba0d11864b"
	Jun 10 12:09:32 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:09:32.004170    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j58s9" podUID="f1c91612-b967-447e-bc71-13ba0d11864b"
	Jun 10 12:09:47 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:09:47.002967    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j58s9" podUID="f1c91612-b967-447e-bc71-13ba0d11864b"
	Jun 10 12:09:59 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:09:59.005200    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j58s9" podUID="f1c91612-b967-447e-bc71-13ba0d11864b"
	Jun 10 12:10:03 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:10:03.021669    3901 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:10:03 default-k8s-diff-port-281114 kubelet[3901]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:10:03 default-k8s-diff-port-281114 kubelet[3901]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:10:03 default-k8s-diff-port-281114 kubelet[3901]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:10:03 default-k8s-diff-port-281114 kubelet[3901]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:10:11 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:10:11.004653    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j58s9" podUID="f1c91612-b967-447e-bc71-13ba0d11864b"
	Jun 10 12:10:26 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:10:26.003442    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j58s9" podUID="f1c91612-b967-447e-bc71-13ba0d11864b"
	Jun 10 12:10:41 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:10:41.003318    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j58s9" podUID="f1c91612-b967-447e-bc71-13ba0d11864b"
	Jun 10 12:10:56 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:10:56.003705    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j58s9" podUID="f1c91612-b967-447e-bc71-13ba0d11864b"
	Jun 10 12:11:03 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:11:03.022987    3901 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:11:03 default-k8s-diff-port-281114 kubelet[3901]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:11:03 default-k8s-diff-port-281114 kubelet[3901]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:11:03 default-k8s-diff-port-281114 kubelet[3901]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:11:03 default-k8s-diff-port-281114 kubelet[3901]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:11:11 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:11:11.003508    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j58s9" podUID="f1c91612-b967-447e-bc71-13ba0d11864b"
	
	
	==> storage-provisioner [e665a2fb5aecc808097f2fc05d79904e306ff78e8236dae6c9f7e09bce5e7d10] <==
	I0610 12:02:18.986580       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 12:02:19.001826       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 12:02:19.001932       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 12:02:19.016570       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 12:02:19.016928       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b7af0c67-f70d-456a-83bf-769aabe5eb5d", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-281114_03d23b81-e99f-4aae-8541-a05706e8c2c8 became leader
	I0610 12:02:19.016961       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-281114_03d23b81-e99f-4aae-8541-a05706e8c2c8!
	I0610 12:02:19.125265       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-281114_03d23b81-e99f-4aae-8541-a05706e8c2c8!
	

                                                
                                                
-- /stdout --
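The etcd section of the dump above contains several "apply request took too long" warnings, which is usually the first hint of a slow or overloaded backing store on the node. One quick, hypothetical way to triage a dump like this (not part of the test harness) is to pipe "out/minikube-linux-amd64 -p default-k8s-diff-port-281114 logs" through a small Go filter that pulls out those warnings and their durations; a minimal sketch, assuming the etcd lines stay JSON-encoded as shown above and the file name etcdslow.go is our own choice:

// etcdslow.go - a rough triage helper (assumption: not part of the minikube
// test harness). It scans "minikube logs" output on stdin for etcd's
// "apply request took too long" warnings and prints how long each one took.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// etcdLine captures only the fields we need from etcd's JSON log lines.
type etcdLine struct {
	Level string `json:"level"`
	TS    string `json:"ts"`
	Msg   string `json:"msg"`
	Took  string `json:"took"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // etcd trace lines can be long
	for sc.Scan() {
		raw := strings.TrimSpace(sc.Text())
		start := strings.Index(raw, "{")
		if start < 0 {
			continue // not a JSON log line (kernel, kubelet, etc.)
		}
		var l etcdLine
		if err := json.Unmarshal([]byte(raw[start:]), &l); err != nil {
			continue // JSON-looking but not an etcd log line
		}
		if l.Msg == "apply request took too long" {
			fmt.Printf("%s  slow apply: %s\n", l.TS, l.Took)
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}

Run against this dump it would print one line per slow apply (roughly 125-267ms each here), which makes it easy to see whether the slowness is constant or clustered around the restart.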
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-281114 -n default-k8s-diff-port-281114
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-281114 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-j58s9
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-281114 describe pod metrics-server-569cc877fc-j58s9
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-281114 describe pod metrics-server-569cc877fc-j58s9: exit status 1 (96.325991ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-j58s9" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-281114 describe pod metrics-server-569cc877fc-j58s9: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.93s)
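For reference, the post-mortem above reduces to one question: which pods are not in the Running phase, and why. The harness shells out to kubectl for this (helpers_test.go:261 and :277); the sketch below is a hypothetical client-go equivalent of the same field-selector query, with the kubeconfig handling and the use of the profile name as the context being assumptions rather than the harness's actual code:

// nonrunning.go - a rough equivalent (assumption, not helpers_test.go itself)
// of the post-mortem step above: list pods in any namespace whose phase is
// not Running, i.e. kubectl get po -A --field-selector=status.phase!=Running.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the minikube profile name doubles as the kubeconfig context.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "default-k8s-diff-port-281114"}
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Same field selector the post-mortem uses: anything not in phase Running.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}

That also accounts for the NotFound error above: the metrics-server pod returned by the field-selector query was apparently gone again by the time the describe call ran.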

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (391.85s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-832735 -n embed-certs-832735
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-06-10 12:09:08.222883306 +0000 UTC m=+6501.308914800
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-832735 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-832735 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.672µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-832735 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
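The assertion at start_stop_delete_test.go:297 expects the dashboard deployments to reference registry.k8s.io/echoserver:1.4, the image substituted in via --images=MetricsScraper=... (see the Audit table below); it reports empty deployment info here because the describe call above had already hit the context deadline. A rough, hypothetical stand-in for that image check, shelling out to kubectl much as the harness does (the jsonpath query, file name, and hard-coded context are assumptions, not the harness's exact check):

// dashimage.go - a sketch of the image assertion behind
// start_stop_delete_test.go:297: does any container in the
// kubernetes-dashboard deployments use the expected (substituted) image?
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const (
		kubeContext = "embed-certs-832735"
		wantImage   = "registry.k8s.io/echoserver:1.4" // substituted via --images=MetricsScraper=...
	)

	// Ask kubectl for every container image declared by the namespace's deployments.
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"-n", "kubernetes-dashboard", "get", "deploy",
		"-o", "jsonpath={.items[*].spec.template.spec.containers[*].image}").CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "kubectl failed: %v\n%s", err, out)
		os.Exit(1)
	}

	images := strings.Fields(string(out))
	fmt.Println("deployment images:", images)
	for _, img := range images {
		if strings.Contains(img, wantImage) {
			fmt.Println("expected image found")
			return
		}
	}
	fmt.Println("expected image not found")
	os.Exit(1)
}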
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-832735 -n embed-certs-832735
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-832735 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-832735 logs -n 25: (1.210436911s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-298179                                   | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:39 UTC | 10 Jun 24 11:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-832735            | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:40 UTC | 10 Jun 24 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-832735                                  | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC | 10 Jun 24 11:41 UTC |
	| addons  | enable metrics-server -p no-preload-298179             | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC | 10 Jun 24 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-298179                                   | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC | 10 Jun 24 11:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC | 10 Jun 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-832735                 | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-832735                                  | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC | 10 Jun 24 11:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-166693        | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-298179                  | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC | 10 Jun 24 11:44 UTC |
	| start   | -p                                                     | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC | 10 Jun 24 11:49 UTC |
	|         | default-k8s-diff-port-281114                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p no-preload-298179                                   | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC | 10 Jun 24 11:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-166693                              | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:45 UTC | 10 Jun 24 11:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-166693             | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:45 UTC | 10 Jun 24 11:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-166693                              | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-281114  | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:49 UTC | 10 Jun 24 11:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:49 UTC |                     |
	|         | default-k8s-diff-port-281114                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-281114       | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:51 UTC | 10 Jun 24 12:02 UTC |
	|         | default-k8s-diff-port-281114                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-166693                              | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 12:08 UTC | 10 Jun 24 12:08 UTC |
	| start   | -p newest-cni-003554 --memory=2200 --alsologtostderr   | newest-cni-003554            | jenkins | v1.33.1 | 10 Jun 24 12:08 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 12:08:59
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 12:08:59.994526   64548 out.go:291] Setting OutFile to fd 1 ...
	I0610 12:08:59.994815   64548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 12:08:59.994828   64548 out.go:304] Setting ErrFile to fd 2...
	I0610 12:08:59.994835   64548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 12:08:59.995138   64548 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 12:08:59.995795   64548 out.go:298] Setting JSON to false
	I0610 12:08:59.996814   64548 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6681,"bootTime":1718014659,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 12:08:59.996904   64548 start.go:139] virtualization: kvm guest
	I0610 12:09:00.000114   64548 out.go:177] * [newest-cni-003554] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 12:09:00.001669   64548 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 12:09:00.001682   64548 notify.go:220] Checking for updates...
	I0610 12:09:00.003096   64548 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 12:09:00.004624   64548 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 12:09:00.006046   64548 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 12:09:00.007540   64548 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 12:09:00.009038   64548 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 12:09:00.010984   64548 config.go:182] Loaded profile config "default-k8s-diff-port-281114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 12:09:00.011106   64548 config.go:182] Loaded profile config "embed-certs-832735": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 12:09:00.011218   64548 config.go:182] Loaded profile config "no-preload-298179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 12:09:00.011318   64548 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 12:09:00.050256   64548 out.go:177] * Using the kvm2 driver based on user configuration
	I0610 12:09:00.051611   64548 start.go:297] selected driver: kvm2
	I0610 12:09:00.051630   64548 start.go:901] validating driver "kvm2" against <nil>
	I0610 12:09:00.051646   64548 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 12:09:00.052713   64548 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 12:09:00.052807   64548 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19046-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0610 12:09:00.069438   64548 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0610 12:09:00.069499   64548 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0610 12:09:00.069537   64548 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0610 12:09:00.069764   64548 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0610 12:09:00.069792   64548 cni.go:84] Creating CNI manager for ""
	I0610 12:09:00.069806   64548 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 12:09:00.069826   64548 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 12:09:00.069925   64548 start.go:340] cluster config:
	{Name:newest-cni-003554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:newest-cni-003554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 12:09:00.070095   64548 iso.go:125] acquiring lock: {Name:mk85871dbcb370cef71a4aa2ff7ef43503908c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 12:09:00.072380   64548 out.go:177] * Starting "newest-cni-003554" primary control-plane node in "newest-cni-003554" cluster
	I0610 12:09:00.073746   64548 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 12:09:00.073794   64548 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0610 12:09:00.073805   64548 cache.go:56] Caching tarball of preloaded images
	I0610 12:09:00.073957   64548 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 12:09:00.073980   64548 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0610 12:09:00.074081   64548 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/newest-cni-003554/config.json ...
	I0610 12:09:00.074107   64548 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/newest-cni-003554/config.json: {Name:mk4a68a5d1eaf0cc693a610cec3e6cf480f8bc12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:09:00.074297   64548 start.go:360] acquireMachinesLock for newest-cni-003554: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 12:09:00.074334   64548 start.go:364] duration metric: took 20.494µs to acquireMachinesLock for "newest-cni-003554"
	I0610 12:09:00.074357   64548 start.go:93] Provisioning new machine with config: &{Name:newest-cni-003554 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.1 ClusterName:newest-cni-003554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 12:09:00.074434   64548 start.go:125] createHost starting for "" (driver="kvm2")
	I0610 12:09:00.076224   64548 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 12:09:00.076379   64548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:09:00.076432   64548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:09:00.093824   64548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39421
	I0610 12:09:00.094259   64548 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:09:00.094888   64548 main.go:141] libmachine: Using API Version  1
	I0610 12:09:00.094909   64548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:09:00.095257   64548 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:09:00.095492   64548 main.go:141] libmachine: (newest-cni-003554) Calling .GetMachineName
	I0610 12:09:00.095694   64548 main.go:141] libmachine: (newest-cni-003554) Calling .DriverName
	I0610 12:09:00.095876   64548 start.go:159] libmachine.API.Create for "newest-cni-003554" (driver="kvm2")
	I0610 12:09:00.095901   64548 client.go:168] LocalClient.Create starting
	I0610 12:09:00.095934   64548 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem
	I0610 12:09:00.095971   64548 main.go:141] libmachine: Decoding PEM data...
	I0610 12:09:00.095986   64548 main.go:141] libmachine: Parsing certificate...
	I0610 12:09:00.096035   64548 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem
	I0610 12:09:00.096054   64548 main.go:141] libmachine: Decoding PEM data...
	I0610 12:09:00.096068   64548 main.go:141] libmachine: Parsing certificate...
	I0610 12:09:00.096091   64548 main.go:141] libmachine: Running pre-create checks...
	I0610 12:09:00.096101   64548 main.go:141] libmachine: (newest-cni-003554) Calling .PreCreateCheck
	I0610 12:09:00.096414   64548 main.go:141] libmachine: (newest-cni-003554) Calling .GetConfigRaw
	I0610 12:09:00.096823   64548 main.go:141] libmachine: Creating machine...
	I0610 12:09:00.096836   64548 main.go:141] libmachine: (newest-cni-003554) Calling .Create
	I0610 12:09:00.097018   64548 main.go:141] libmachine: (newest-cni-003554) Creating KVM machine...
	I0610 12:09:00.098379   64548 main.go:141] libmachine: (newest-cni-003554) DBG | found existing default KVM network
	I0610 12:09:00.099516   64548 main.go:141] libmachine: (newest-cni-003554) DBG | I0610 12:09:00.099382   64572 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:a3:4e:93} reservation:<nil>}
	I0610 12:09:00.100343   64548 main.go:141] libmachine: (newest-cni-003554) DBG | I0610 12:09:00.100252   64572 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:0c:3e:0d} reservation:<nil>}
	I0610 12:09:00.101141   64548 main.go:141] libmachine: (newest-cni-003554) DBG | I0610 12:09:00.101055   64572 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:e5:5b:4b} reservation:<nil>}
	I0610 12:09:00.102223   64548 main.go:141] libmachine: (newest-cni-003554) DBG | I0610 12:09:00.102130   64572 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002e7100}
	I0610 12:09:00.102259   64548 main.go:141] libmachine: (newest-cni-003554) DBG | created network xml: 
	I0610 12:09:00.102270   64548 main.go:141] libmachine: (newest-cni-003554) DBG | <network>
	I0610 12:09:00.102279   64548 main.go:141] libmachine: (newest-cni-003554) DBG |   <name>mk-newest-cni-003554</name>
	I0610 12:09:00.102289   64548 main.go:141] libmachine: (newest-cni-003554) DBG |   <dns enable='no'/>
	I0610 12:09:00.102306   64548 main.go:141] libmachine: (newest-cni-003554) DBG |   
	I0610 12:09:00.102329   64548 main.go:141] libmachine: (newest-cni-003554) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0610 12:09:00.102348   64548 main.go:141] libmachine: (newest-cni-003554) DBG |     <dhcp>
	I0610 12:09:00.102381   64548 main.go:141] libmachine: (newest-cni-003554) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0610 12:09:00.102408   64548 main.go:141] libmachine: (newest-cni-003554) DBG |     </dhcp>
	I0610 12:09:00.102419   64548 main.go:141] libmachine: (newest-cni-003554) DBG |   </ip>
	I0610 12:09:00.102428   64548 main.go:141] libmachine: (newest-cni-003554) DBG |   
	I0610 12:09:00.102437   64548 main.go:141] libmachine: (newest-cni-003554) DBG | </network>
	I0610 12:09:00.102444   64548 main.go:141] libmachine: (newest-cni-003554) DBG | 
	I0610 12:09:00.107982   64548 main.go:141] libmachine: (newest-cni-003554) DBG | trying to create private KVM network mk-newest-cni-003554 192.168.72.0/24...
	I0610 12:09:00.188714   64548 main.go:141] libmachine: (newest-cni-003554) DBG | private KVM network mk-newest-cni-003554 192.168.72.0/24 created
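	(Editor's note: the lines above show minikube generating libvirt network XML for mk-newest-cni-003554 and then creating that private network before defining the VM. As a rough illustration only, and not minikube's own code, a network like this could be defined and started through the libvirt Go bindings; the network name, XML literal, and error handling below are assumptions for the sketch, while the connection URI matches the qemu:///system value seen in the log.)

```go
// Hypothetical sketch: define and start a private libvirt network similar to
// the mk-newest-cni-003554 XML logged above. This is not minikube's
// implementation; it requires a running libvirt daemon and the cgo-based
// libvirt.org/go/libvirt bindings.
package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

const networkXML = `<network>
  <name>mk-example-net</name>
  <dns enable='no'/>
  <ip address='192.168.72.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.72.2' end='192.168.72.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as in the log above
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// Persist the network definition from the XML string.
	network, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatalf("define network: %v", err)
	}
	defer network.Free()

	// Activate (start) the defined network so DHCP is served on 192.168.72.0/24.
	if err := network.Create(); err != nil {
		log.Fatalf("start network: %v", err)
	}
	log.Println("network mk-example-net is active")
}
```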
	I0610 12:09:00.188767   64548 main.go:141] libmachine: (newest-cni-003554) Setting up store path in /home/jenkins/minikube-integration/19046-3880/.minikube/machines/newest-cni-003554 ...
	I0610 12:09:00.188782   64548 main.go:141] libmachine: (newest-cni-003554) DBG | I0610 12:09:00.188709   64572 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 12:09:00.188802   64548 main.go:141] libmachine: (newest-cni-003554) Building disk image from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0610 12:09:00.188850   64548 main.go:141] libmachine: (newest-cni-003554) Downloading /home/jenkins/minikube-integration/19046-3880/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 12:09:00.443125   64548 main.go:141] libmachine: (newest-cni-003554) DBG | I0610 12:09:00.442984   64572 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/newest-cni-003554/id_rsa...
	I0610 12:09:00.520765   64548 main.go:141] libmachine: (newest-cni-003554) DBG | I0610 12:09:00.520664   64572 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/newest-cni-003554/newest-cni-003554.rawdisk...
	I0610 12:09:00.520795   64548 main.go:141] libmachine: (newest-cni-003554) DBG | Writing magic tar header
	I0610 12:09:00.520807   64548 main.go:141] libmachine: (newest-cni-003554) DBG | Writing SSH key tar header
	I0610 12:09:00.520816   64548 main.go:141] libmachine: (newest-cni-003554) DBG | I0610 12:09:00.520778   64572 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/newest-cni-003554 ...
	I0610 12:09:00.520975   64548 main.go:141] libmachine: (newest-cni-003554) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/newest-cni-003554
	I0610 12:09:00.521005   64548 main.go:141] libmachine: (newest-cni-003554) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines
	I0610 12:09:00.521020   64548 main.go:141] libmachine: (newest-cni-003554) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/newest-cni-003554 (perms=drwx------)
	I0610 12:09:00.521032   64548 main.go:141] libmachine: (newest-cni-003554) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 12:09:00.521045   64548 main.go:141] libmachine: (newest-cni-003554) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880
	I0610 12:09:00.521055   64548 main.go:141] libmachine: (newest-cni-003554) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0610 12:09:00.521091   64548 main.go:141] libmachine: (newest-cni-003554) DBG | Checking permissions on dir: /home/jenkins
	I0610 12:09:00.521131   64548 main.go:141] libmachine: (newest-cni-003554) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines (perms=drwxr-xr-x)
	I0610 12:09:00.521146   64548 main.go:141] libmachine: (newest-cni-003554) DBG | Checking permissions on dir: /home
	I0610 12:09:00.521159   64548 main.go:141] libmachine: (newest-cni-003554) DBG | Skipping /home - not owner
	I0610 12:09:00.521170   64548 main.go:141] libmachine: (newest-cni-003554) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube (perms=drwxr-xr-x)
	I0610 12:09:00.521179   64548 main.go:141] libmachine: (newest-cni-003554) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880 (perms=drwxrwxr-x)
	I0610 12:09:00.521187   64548 main.go:141] libmachine: (newest-cni-003554) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0610 12:09:00.521198   64548 main.go:141] libmachine: (newest-cni-003554) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0610 12:09:00.521210   64548 main.go:141] libmachine: (newest-cni-003554) Creating domain...
	I0610 12:09:00.522379   64548 main.go:141] libmachine: (newest-cni-003554) define libvirt domain using xml: 
	I0610 12:09:00.522395   64548 main.go:141] libmachine: (newest-cni-003554) <domain type='kvm'>
	I0610 12:09:00.522403   64548 main.go:141] libmachine: (newest-cni-003554)   <name>newest-cni-003554</name>
	I0610 12:09:00.522412   64548 main.go:141] libmachine: (newest-cni-003554)   <memory unit='MiB'>2200</memory>
	I0610 12:09:00.522439   64548 main.go:141] libmachine: (newest-cni-003554)   <vcpu>2</vcpu>
	I0610 12:09:00.522456   64548 main.go:141] libmachine: (newest-cni-003554)   <features>
	I0610 12:09:00.522468   64548 main.go:141] libmachine: (newest-cni-003554)     <acpi/>
	I0610 12:09:00.522479   64548 main.go:141] libmachine: (newest-cni-003554)     <apic/>
	I0610 12:09:00.522485   64548 main.go:141] libmachine: (newest-cni-003554)     <pae/>
	I0610 12:09:00.522492   64548 main.go:141] libmachine: (newest-cni-003554)     
	I0610 12:09:00.522497   64548 main.go:141] libmachine: (newest-cni-003554)   </features>
	I0610 12:09:00.522505   64548 main.go:141] libmachine: (newest-cni-003554)   <cpu mode='host-passthrough'>
	I0610 12:09:00.522512   64548 main.go:141] libmachine: (newest-cni-003554)   
	I0610 12:09:00.522517   64548 main.go:141] libmachine: (newest-cni-003554)   </cpu>
	I0610 12:09:00.522525   64548 main.go:141] libmachine: (newest-cni-003554)   <os>
	I0610 12:09:00.522529   64548 main.go:141] libmachine: (newest-cni-003554)     <type>hvm</type>
	I0610 12:09:00.522537   64548 main.go:141] libmachine: (newest-cni-003554)     <boot dev='cdrom'/>
	I0610 12:09:00.522542   64548 main.go:141] libmachine: (newest-cni-003554)     <boot dev='hd'/>
	I0610 12:09:00.522550   64548 main.go:141] libmachine: (newest-cni-003554)     <bootmenu enable='no'/>
	I0610 12:09:00.522554   64548 main.go:141] libmachine: (newest-cni-003554)   </os>
	I0610 12:09:00.522594   64548 main.go:141] libmachine: (newest-cni-003554)   <devices>
	I0610 12:09:00.522623   64548 main.go:141] libmachine: (newest-cni-003554)     <disk type='file' device='cdrom'>
	I0610 12:09:00.522636   64548 main.go:141] libmachine: (newest-cni-003554)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/newest-cni-003554/boot2docker.iso'/>
	I0610 12:09:00.522647   64548 main.go:141] libmachine: (newest-cni-003554)       <target dev='hdc' bus='scsi'/>
	I0610 12:09:00.522661   64548 main.go:141] libmachine: (newest-cni-003554)       <readonly/>
	I0610 12:09:00.522671   64548 main.go:141] libmachine: (newest-cni-003554)     </disk>
	I0610 12:09:00.522682   64548 main.go:141] libmachine: (newest-cni-003554)     <disk type='file' device='disk'>
	I0610 12:09:00.522699   64548 main.go:141] libmachine: (newest-cni-003554)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0610 12:09:00.522717   64548 main.go:141] libmachine: (newest-cni-003554)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/newest-cni-003554/newest-cni-003554.rawdisk'/>
	I0610 12:09:00.522728   64548 main.go:141] libmachine: (newest-cni-003554)       <target dev='hda' bus='virtio'/>
	I0610 12:09:00.522741   64548 main.go:141] libmachine: (newest-cni-003554)     </disk>
	I0610 12:09:00.522752   64548 main.go:141] libmachine: (newest-cni-003554)     <interface type='network'>
	I0610 12:09:00.522766   64548 main.go:141] libmachine: (newest-cni-003554)       <source network='mk-newest-cni-003554'/>
	I0610 12:09:00.522781   64548 main.go:141] libmachine: (newest-cni-003554)       <model type='virtio'/>
	I0610 12:09:00.522793   64548 main.go:141] libmachine: (newest-cni-003554)     </interface>
	I0610 12:09:00.522804   64548 main.go:141] libmachine: (newest-cni-003554)     <interface type='network'>
	I0610 12:09:00.522816   64548 main.go:141] libmachine: (newest-cni-003554)       <source network='default'/>
	I0610 12:09:00.522827   64548 main.go:141] libmachine: (newest-cni-003554)       <model type='virtio'/>
	I0610 12:09:00.522838   64548 main.go:141] libmachine: (newest-cni-003554)     </interface>
	I0610 12:09:00.522853   64548 main.go:141] libmachine: (newest-cni-003554)     <serial type='pty'>
	I0610 12:09:00.522865   64548 main.go:141] libmachine: (newest-cni-003554)       <target port='0'/>
	I0610 12:09:00.522875   64548 main.go:141] libmachine: (newest-cni-003554)     </serial>
	I0610 12:09:00.522886   64548 main.go:141] libmachine: (newest-cni-003554)     <console type='pty'>
	I0610 12:09:00.522897   64548 main.go:141] libmachine: (newest-cni-003554)       <target type='serial' port='0'/>
	I0610 12:09:00.522909   64548 main.go:141] libmachine: (newest-cni-003554)     </console>
	I0610 12:09:00.522920   64548 main.go:141] libmachine: (newest-cni-003554)     <rng model='virtio'>
	I0610 12:09:00.522934   64548 main.go:141] libmachine: (newest-cni-003554)       <backend model='random'>/dev/random</backend>
	I0610 12:09:00.522948   64548 main.go:141] libmachine: (newest-cni-003554)     </rng>
	I0610 12:09:00.522959   64548 main.go:141] libmachine: (newest-cni-003554)     
	I0610 12:09:00.522970   64548 main.go:141] libmachine: (newest-cni-003554)     
	I0610 12:09:00.522981   64548 main.go:141] libmachine: (newest-cni-003554)   </devices>
	I0610 12:09:00.522994   64548 main.go:141] libmachine: (newest-cni-003554) </domain>
	I0610 12:09:00.523008   64548 main.go:141] libmachine: (newest-cni-003554) 
	I0610 12:09:00.527571   64548 main.go:141] libmachine: (newest-cni-003554) DBG | domain newest-cni-003554 has defined MAC address 52:54:00:1a:4f:55 in network default
	I0610 12:09:00.528200   64548 main.go:141] libmachine: (newest-cni-003554) Ensuring networks are active...
	I0610 12:09:00.528219   64548 main.go:141] libmachine: (newest-cni-003554) DBG | domain newest-cni-003554 has defined MAC address 52:54:00:8d:92:91 in network mk-newest-cni-003554
	I0610 12:09:00.528922   64548 main.go:141] libmachine: (newest-cni-003554) Ensuring network default is active
	I0610 12:09:00.529266   64548 main.go:141] libmachine: (newest-cni-003554) Ensuring network mk-newest-cni-003554 is active
	I0610 12:09:00.529788   64548 main.go:141] libmachine: (newest-cni-003554) Getting domain xml...
	I0610 12:09:00.530558   64548 main.go:141] libmachine: (newest-cni-003554) Creating domain...
	I0610 12:09:01.798485   64548 main.go:141] libmachine: (newest-cni-003554) Waiting to get IP...
	I0610 12:09:01.799434   64548 main.go:141] libmachine: (newest-cni-003554) DBG | domain newest-cni-003554 has defined MAC address 52:54:00:8d:92:91 in network mk-newest-cni-003554
	I0610 12:09:01.799941   64548 main.go:141] libmachine: (newest-cni-003554) DBG | unable to find current IP address of domain newest-cni-003554 in network mk-newest-cni-003554
	I0610 12:09:01.799968   64548 main.go:141] libmachine: (newest-cni-003554) DBG | I0610 12:09:01.799908   64572 retry.go:31] will retry after 207.622782ms: waiting for machine to come up
	I0610 12:09:02.009643   64548 main.go:141] libmachine: (newest-cni-003554) DBG | domain newest-cni-003554 has defined MAC address 52:54:00:8d:92:91 in network mk-newest-cni-003554
	I0610 12:09:02.010189   64548 main.go:141] libmachine: (newest-cni-003554) DBG | unable to find current IP address of domain newest-cni-003554 in network mk-newest-cni-003554
	I0610 12:09:02.010214   64548 main.go:141] libmachine: (newest-cni-003554) DBG | I0610 12:09:02.010154   64572 retry.go:31] will retry after 373.223223ms: waiting for machine to come up
	I0610 12:09:02.385567   64548 main.go:141] libmachine: (newest-cni-003554) DBG | domain newest-cni-003554 has defined MAC address 52:54:00:8d:92:91 in network mk-newest-cni-003554
	I0610 12:09:02.386237   64548 main.go:141] libmachine: (newest-cni-003554) DBG | unable to find current IP address of domain newest-cni-003554 in network mk-newest-cni-003554
	I0610 12:09:02.386269   64548 main.go:141] libmachine: (newest-cni-003554) DBG | I0610 12:09:02.386181   64572 retry.go:31] will retry after 407.677835ms: waiting for machine to come up
	I0610 12:09:02.795483   64548 main.go:141] libmachine: (newest-cni-003554) DBG | domain newest-cni-003554 has defined MAC address 52:54:00:8d:92:91 in network mk-newest-cni-003554
	I0610 12:09:02.795938   64548 main.go:141] libmachine: (newest-cni-003554) DBG | unable to find current IP address of domain newest-cni-003554 in network mk-newest-cni-003554
	I0610 12:09:02.795966   64548 main.go:141] libmachine: (newest-cni-003554) DBG | I0610 12:09:02.795884   64572 retry.go:31] will retry after 508.858062ms: waiting for machine to come up
	I0610 12:09:03.306684   64548 main.go:141] libmachine: (newest-cni-003554) DBG | domain newest-cni-003554 has defined MAC address 52:54:00:8d:92:91 in network mk-newest-cni-003554
	I0610 12:09:03.307218   64548 main.go:141] libmachine: (newest-cni-003554) DBG | unable to find current IP address of domain newest-cni-003554 in network mk-newest-cni-003554
	I0610 12:09:03.307268   64548 main.go:141] libmachine: (newest-cni-003554) DBG | I0610 12:09:03.307164   64572 retry.go:31] will retry after 582.955867ms: waiting for machine to come up
	I0610 12:09:03.891607   64548 main.go:141] libmachine: (newest-cni-003554) DBG | domain newest-cni-003554 has defined MAC address 52:54:00:8d:92:91 in network mk-newest-cni-003554
	I0610 12:09:03.892117   64548 main.go:141] libmachine: (newest-cni-003554) DBG | unable to find current IP address of domain newest-cni-003554 in network mk-newest-cni-003554
	I0610 12:09:03.892147   64548 main.go:141] libmachine: (newest-cni-003554) DBG | I0610 12:09:03.892065   64572 retry.go:31] will retry after 789.926258ms: waiting for machine to come up
	I0610 12:09:04.684145   64548 main.go:141] libmachine: (newest-cni-003554) DBG | domain newest-cni-003554 has defined MAC address 52:54:00:8d:92:91 in network mk-newest-cni-003554
	I0610 12:09:04.684552   64548 main.go:141] libmachine: (newest-cni-003554) DBG | unable to find current IP address of domain newest-cni-003554 in network mk-newest-cni-003554
	I0610 12:09:04.684581   64548 main.go:141] libmachine: (newest-cni-003554) DBG | I0610 12:09:04.684506   64572 retry.go:31] will retry after 814.046254ms: waiting for machine to come up
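	(Editor's note: the "will retry after …" lines above come from minikube repeatedly polling libvirt for the new domain's IP address, backing off with a growing, jittered delay until the machine comes up. The following is a minimal sketch of that general poll-with-backoff pattern, not minikube's actual retry.go; lookupIP, the delay schedule, and the timeout are hypothetical stand-ins.)

```go
// Hypothetical sketch of the retry-with-backoff pattern visible in the
// "will retry after ..." log lines above. Not minikube's implementation.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the libvirt DHCP-lease query the real code performs;
// here it always fails so the backoff behaviour is visible when run.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address of domain " + domain)
}

// waitForIP polls lookupIP until it succeeds or the timeout elapses, sleeping
// a jittered, gradually growing interval between attempts.
func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		ip, err := lookupIP(domain)
		if err == nil {
			return ip, nil
		}
		// Jittered, growing delay, similar in spirit to the ~208ms, 373ms,
		// 408ms, ... intervals logged above (the exact schedule here is assumed).
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d: %v; will retry after %v\n", attempt, err, sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
}

func main() {
	if _, err := waitForIP("newest-cni-003554", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}
```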
	
	
	==> CRI-O <==
	Jun 10 12:09:08 embed-certs-832735 crio[734]: time="2024-06-10 12:09:08.852362522Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718021348852338529,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b8a0959-3e39-47bf-af07-8c2abeed3823 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:09:08 embed-certs-832735 crio[734]: time="2024-06-10 12:09:08.852952401Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=70138488-eade-4bc0-ac41-77b9fb0c48f3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:09:08 embed-certs-832735 crio[734]: time="2024-06-10 12:09:08.853021272Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=70138488-eade-4bc0-ac41-77b9fb0c48f3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:09:08 embed-certs-832735 crio[734]: time="2024-06-10 12:09:08.853287908Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e,PodSandboxId:6426cdc85c4e032d630d1f3f20e3a1d911b05b5724564b52378b60625d241c19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718020182077963619,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47aa143e-3545-492d-ac93-e62f0076e0f4,},Annotations:map[string]string{io.kubernetes.container.hash: 5af6a72a,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba048d2e541288c34094ca550643148bb0b678c978c73d61f1d5e05a37221409,PodSandboxId:1ae5f2ccfc7b21cf9a3d8c640b4451a279b94de084c199fc4f85a661935aef90,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718020161756999359,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5a24d2e-a638-4a3c-bd49-8c6f5c07b55b,},Annotations:map[string]string{io.kubernetes.container.hash: e1a981bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933,PodSandboxId:923f47493ca157b932694bb125b000a5098d73225de284ba506ace381c9bec54,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020158905307912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7dlzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2618cd-b48c-44bd-a07d-4fe4585a14fa,},Annotations:map[string]string{io.kubernetes.container.hash: 2e716d93,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb,PodSandboxId:520c8f4f7df845a87160476ca3b69e4518730eb6fb678f6f7f6c8e6584a15b68,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718020151236006901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7x2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe1cd055-691f-46b1-a
da7-7dded31d2308,},Annotations:map[string]string{io.kubernetes.container.hash: 26a6f7ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262,PodSandboxId:6426cdc85c4e032d630d1f3f20e3a1d911b05b5724564b52378b60625d241c19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718020151226592147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47aa143e-3545-492d-ac93-e62f0076e
0f4,},Annotations:map[string]string{io.kubernetes.container.hash: 5af6a72a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9,PodSandboxId:332276b6ad39dc96b4106806b7d77b06f1db626468eae1d34cd7c0fb674d5ffc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718020147588004348,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165c62b8eb6ccf1956b1ca8d650bbbf1,},Annota
tions:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c,PodSandboxId:9a64ac451ab433068e46583db1b28db0e3920ec45344d20ced406a5a7294fd0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718020147577750081,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f460092c2c832cd821e0ae3b0d1c7dae,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: fa055ffe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29,PodSandboxId:38e659f103b780fd8f5e98550704fcf98f1361ec0501bcb94ba51dbf158e2b23,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718020147605046866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f8f26a120a10c36d3480d7e942d748f,},Annotations:map[string]string{io.kubernetes.container.hash:
d59a1a0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43,PodSandboxId:44d07e419bbf8db720588bfefe8724f72a30ce268ec55872513035ac188fb1af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718020147590822623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de4938d9e608e2b1641472107eb959dd,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=70138488-eade-4bc0-ac41-77b9fb0c48f3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:09:08 embed-certs-832735 crio[734]: time="2024-06-10 12:09:08.891619099Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=afc3bfae-3a95-455d-9bba-aa8c1dad1e08 name=/runtime.v1.RuntimeService/Version
	Jun 10 12:09:08 embed-certs-832735 crio[734]: time="2024-06-10 12:09:08.891694809Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=afc3bfae-3a95-455d-9bba-aa8c1dad1e08 name=/runtime.v1.RuntimeService/Version
	Jun 10 12:09:08 embed-certs-832735 crio[734]: time="2024-06-10 12:09:08.892849111Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e42f2f44-ab52-4738-b1e9-450ea3e50bc2 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:09:08 embed-certs-832735 crio[734]: time="2024-06-10 12:09:08.893395689Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718021348893360327,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e42f2f44-ab52-4738-b1e9-450ea3e50bc2 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:09:08 embed-certs-832735 crio[734]: time="2024-06-10 12:09:08.893892746Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9328947f-f7f3-4f66-a453-54ca47256b71 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:09:08 embed-certs-832735 crio[734]: time="2024-06-10 12:09:08.893945145Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9328947f-f7f3-4f66-a453-54ca47256b71 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:09:08 embed-certs-832735 crio[734]: time="2024-06-10 12:09:08.894163653Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e,PodSandboxId:6426cdc85c4e032d630d1f3f20e3a1d911b05b5724564b52378b60625d241c19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718020182077963619,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47aa143e-3545-492d-ac93-e62f0076e0f4,},Annotations:map[string]string{io.kubernetes.container.hash: 5af6a72a,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba048d2e541288c34094ca550643148bb0b678c978c73d61f1d5e05a37221409,PodSandboxId:1ae5f2ccfc7b21cf9a3d8c640b4451a279b94de084c199fc4f85a661935aef90,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718020161756999359,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5a24d2e-a638-4a3c-bd49-8c6f5c07b55b,},Annotations:map[string]string{io.kubernetes.container.hash: e1a981bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933,PodSandboxId:923f47493ca157b932694bb125b000a5098d73225de284ba506ace381c9bec54,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020158905307912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7dlzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2618cd-b48c-44bd-a07d-4fe4585a14fa,},Annotations:map[string]string{io.kubernetes.container.hash: 2e716d93,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb,PodSandboxId:520c8f4f7df845a87160476ca3b69e4518730eb6fb678f6f7f6c8e6584a15b68,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718020151236006901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7x2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe1cd055-691f-46b1-a
da7-7dded31d2308,},Annotations:map[string]string{io.kubernetes.container.hash: 26a6f7ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262,PodSandboxId:6426cdc85c4e032d630d1f3f20e3a1d911b05b5724564b52378b60625d241c19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718020151226592147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47aa143e-3545-492d-ac93-e62f0076e
0f4,},Annotations:map[string]string{io.kubernetes.container.hash: 5af6a72a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9,PodSandboxId:332276b6ad39dc96b4106806b7d77b06f1db626468eae1d34cd7c0fb674d5ffc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718020147588004348,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165c62b8eb6ccf1956b1ca8d650bbbf1,},Annota
tions:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c,PodSandboxId:9a64ac451ab433068e46583db1b28db0e3920ec45344d20ced406a5a7294fd0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718020147577750081,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f460092c2c832cd821e0ae3b0d1c7dae,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: fa055ffe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29,PodSandboxId:38e659f103b780fd8f5e98550704fcf98f1361ec0501bcb94ba51dbf158e2b23,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718020147605046866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f8f26a120a10c36d3480d7e942d748f,},Annotations:map[string]string{io.kubernetes.container.hash:
d59a1a0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43,PodSandboxId:44d07e419bbf8db720588bfefe8724f72a30ce268ec55872513035ac188fb1af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718020147590822623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de4938d9e608e2b1641472107eb959dd,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9328947f-f7f3-4f66-a453-54ca47256b71 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:09:08 embed-certs-832735 crio[734]: time="2024-06-10 12:09:08.944104137Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=05217a8a-8e80-47fb-8ee0-d635d21fe684 name=/runtime.v1.RuntimeService/Version
	Jun 10 12:09:08 embed-certs-832735 crio[734]: time="2024-06-10 12:09:08.944191608Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=05217a8a-8e80-47fb-8ee0-d635d21fe684 name=/runtime.v1.RuntimeService/Version
	Jun 10 12:09:08 embed-certs-832735 crio[734]: time="2024-06-10 12:09:08.946156204Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1207085f-a4bd-441f-b01c-55fac6b1f692 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:09:08 embed-certs-832735 crio[734]: time="2024-06-10 12:09:08.946762201Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718021348946728706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1207085f-a4bd-441f-b01c-55fac6b1f692 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:09:08 embed-certs-832735 crio[734]: time="2024-06-10 12:09:08.947906345Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=12dcbc6c-4a7d-4194-984c-1255694063cc name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:09:08 embed-certs-832735 crio[734]: time="2024-06-10 12:09:08.948044809Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=12dcbc6c-4a7d-4194-984c-1255694063cc name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:09:08 embed-certs-832735 crio[734]: time="2024-06-10 12:09:08.948598706Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e,PodSandboxId:6426cdc85c4e032d630d1f3f20e3a1d911b05b5724564b52378b60625d241c19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718020182077963619,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47aa143e-3545-492d-ac93-e62f0076e0f4,},Annotations:map[string]string{io.kubernetes.container.hash: 5af6a72a,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba048d2e541288c34094ca550643148bb0b678c978c73d61f1d5e05a37221409,PodSandboxId:1ae5f2ccfc7b21cf9a3d8c640b4451a279b94de084c199fc4f85a661935aef90,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718020161756999359,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5a24d2e-a638-4a3c-bd49-8c6f5c07b55b,},Annotations:map[string]string{io.kubernetes.container.hash: e1a981bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933,PodSandboxId:923f47493ca157b932694bb125b000a5098d73225de284ba506ace381c9bec54,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020158905307912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7dlzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2618cd-b48c-44bd-a07d-4fe4585a14fa,},Annotations:map[string]string{io.kubernetes.container.hash: 2e716d93,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb,PodSandboxId:520c8f4f7df845a87160476ca3b69e4518730eb6fb678f6f7f6c8e6584a15b68,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718020151236006901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7x2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe1cd055-691f-46b1-a
da7-7dded31d2308,},Annotations:map[string]string{io.kubernetes.container.hash: 26a6f7ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262,PodSandboxId:6426cdc85c4e032d630d1f3f20e3a1d911b05b5724564b52378b60625d241c19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718020151226592147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47aa143e-3545-492d-ac93-e62f0076e
0f4,},Annotations:map[string]string{io.kubernetes.container.hash: 5af6a72a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9,PodSandboxId:332276b6ad39dc96b4106806b7d77b06f1db626468eae1d34cd7c0fb674d5ffc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718020147588004348,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165c62b8eb6ccf1956b1ca8d650bbbf1,},Annota
tions:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c,PodSandboxId:9a64ac451ab433068e46583db1b28db0e3920ec45344d20ced406a5a7294fd0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718020147577750081,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f460092c2c832cd821e0ae3b0d1c7dae,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: fa055ffe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29,PodSandboxId:38e659f103b780fd8f5e98550704fcf98f1361ec0501bcb94ba51dbf158e2b23,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718020147605046866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f8f26a120a10c36d3480d7e942d748f,},Annotations:map[string]string{io.kubernetes.container.hash:
d59a1a0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43,PodSandboxId:44d07e419bbf8db720588bfefe8724f72a30ce268ec55872513035ac188fb1af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718020147590822623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de4938d9e608e2b1641472107eb959dd,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=12dcbc6c-4a7d-4194-984c-1255694063cc name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:09:08 embed-certs-832735 crio[734]: time="2024-06-10 12:09:08.992418311Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50e546b6-4e54-45fa-9a97-1acb25a510db name=/runtime.v1.RuntimeService/Version
	Jun 10 12:09:08 embed-certs-832735 crio[734]: time="2024-06-10 12:09:08.992615184Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50e546b6-4e54-45fa-9a97-1acb25a510db name=/runtime.v1.RuntimeService/Version
	Jun 10 12:09:08 embed-certs-832735 crio[734]: time="2024-06-10 12:09:08.994462588Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ac31d8b-e34e-4fae-a2e4-c68fe49b8947 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:09:08 embed-certs-832735 crio[734]: time="2024-06-10 12:09:08.994894414Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718021348994872432,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ac31d8b-e34e-4fae-a2e4-c68fe49b8947 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:09:08 embed-certs-832735 crio[734]: time="2024-06-10 12:09:08.995706610Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3618c1f-3d23-4c55-be65-8025ef579c6b name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:09:08 embed-certs-832735 crio[734]: time="2024-06-10 12:09:08.995772886Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3618c1f-3d23-4c55-be65-8025ef579c6b name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:09:08 embed-certs-832735 crio[734]: time="2024-06-10 12:09:08.996479699Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e,PodSandboxId:6426cdc85c4e032d630d1f3f20e3a1d911b05b5724564b52378b60625d241c19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718020182077963619,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47aa143e-3545-492d-ac93-e62f0076e0f4,},Annotations:map[string]string{io.kubernetes.container.hash: 5af6a72a,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba048d2e541288c34094ca550643148bb0b678c978c73d61f1d5e05a37221409,PodSandboxId:1ae5f2ccfc7b21cf9a3d8c640b4451a279b94de084c199fc4f85a661935aef90,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1718020161756999359,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5a24d2e-a638-4a3c-bd49-8c6f5c07b55b,},Annotations:map[string]string{io.kubernetes.container.hash: e1a981bb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933,PodSandboxId:923f47493ca157b932694bb125b000a5098d73225de284ba506ace381c9bec54,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020158905307912,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7dlzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b2618cd-b48c-44bd-a07d-4fe4585a14fa,},Annotations:map[string]string{io.kubernetes.container.hash: 2e716d93,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb,PodSandboxId:520c8f4f7df845a87160476ca3b69e4518730eb6fb678f6f7f6c8e6584a15b68,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718020151236006901,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7x2p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe1cd055-691f-46b1-a
da7-7dded31d2308,},Annotations:map[string]string{io.kubernetes.container.hash: 26a6f7ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262,PodSandboxId:6426cdc85c4e032d630d1f3f20e3a1d911b05b5724564b52378b60625d241c19,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1718020151226592147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47aa143e-3545-492d-ac93-e62f0076e
0f4,},Annotations:map[string]string{io.kubernetes.container.hash: 5af6a72a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9,PodSandboxId:332276b6ad39dc96b4106806b7d77b06f1db626468eae1d34cd7c0fb674d5ffc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718020147588004348,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 165c62b8eb6ccf1956b1ca8d650bbbf1,},Annota
tions:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c,PodSandboxId:9a64ac451ab433068e46583db1b28db0e3920ec45344d20ced406a5a7294fd0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718020147577750081,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f460092c2c832cd821e0ae3b0d1c7dae,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: fa055ffe,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29,PodSandboxId:38e659f103b780fd8f5e98550704fcf98f1361ec0501bcb94ba51dbf158e2b23,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718020147605046866,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f8f26a120a10c36d3480d7e942d748f,},Annotations:map[string]string{io.kubernetes.container.hash:
d59a1a0c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43,PodSandboxId:44d07e419bbf8db720588bfefe8724f72a30ce268ec55872513035ac188fb1af,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718020147590822623,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-832735,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de4938d9e608e2b1641472107eb959dd,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3618c1f-3d23-4c55-be65-8025ef579c6b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5509696f5a811       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   6426cdc85c4e0       storage-provisioner
	ba048d2e54128       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   1ae5f2ccfc7b2       busybox
	04ef0964178ae       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Running             coredns                   1                   923f47493ca15       coredns-7db6d8ff4d-7dlzb
	3c7292ccdd40d       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      19 minutes ago      Running             kube-proxy                1                   520c8f4f7df84       kube-proxy-b7x2p
	8d8bc4b6855e1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   6426cdc85c4e0       storage-provisioner
	61727f8f43e1d       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      20 minutes ago      Running             kube-apiserver            1                   38e659f103b78       kube-apiserver-embed-certs-832735
	7badb7b66c71f       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      20 minutes ago      Running             kube-controller-manager   1                   44d07e419bbf8       kube-controller-manager-embed-certs-832735
	7afbab9bcf1ac       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      20 minutes ago      Running             kube-scheduler            1                   332276b6ad39d       kube-scheduler-embed-certs-832735
	0c16d9960d9ab       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      20 minutes ago      Running             etcd                      1                   9a64ac451ab43       etcd-embed-certs-832735
	
	
	==> coredns [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:33457 - 57325 "HINFO IN 4448557384152593783.2575088353663232798. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015730222s
	
	
	==> describe nodes <==
	Name:               embed-certs-832735
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-832735
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=embed-certs-832735
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T11_39_58_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 11:39:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-832735
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 12:09:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 12:04:58 +0000   Mon, 10 Jun 2024 11:39:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 12:04:58 +0000   Mon, 10 Jun 2024 11:39:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 12:04:58 +0000   Mon, 10 Jun 2024 11:39:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 12:04:58 +0000   Mon, 10 Jun 2024 11:49:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.19
	  Hostname:    embed-certs-832735
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 afe682491c8144db9ef90386aaf4c58e
	  System UUID:                afe68249-1c81-44db-9ef9-0386aaf4c58e
	  Boot ID:                    8914484a-56f6-42c0-b4ac-3c6b90f63b0e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7db6d8ff4d-7dlzb                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-embed-certs-832735                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-832735             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-832735    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-b7x2p                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-embed-certs-832735             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-569cc877fc-5zg8j               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-832735 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-832735 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-832735 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node embed-certs-832735 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node embed-certs-832735 event: Registered Node embed-certs-832735 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node embed-certs-832735 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node embed-certs-832735 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node embed-certs-832735 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node embed-certs-832735 event: Registered Node embed-certs-832735 in Controller
	
	
	==> dmesg <==
	[Jun10 11:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.062373] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.050432] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.034773] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.853603] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.376409] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.597653] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.061973] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056388] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.156545] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.135766] systemd-fstab-generator[690]: Ignoring "noauto" option for root device
	[  +0.280231] systemd-fstab-generator[719]: Ignoring "noauto" option for root device
	[Jun10 11:49] systemd-fstab-generator[817]: Ignoring "noauto" option for root device
	[  +2.489149] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +0.064187] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.524571] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.479242] systemd-fstab-generator[1552]: Ignoring "noauto" option for root device
	[  +3.232159] kauditd_printk_skb: 62 callbacks suppressed
	[  +6.684754] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c] <==
	{"level":"info","ts":"2024-06-10T11:49:08.999276Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d18a13a55fd66152 elected leader d18a13a55fd66152 at term 3"}
	{"level":"info","ts":"2024-06-10T11:49:09.008338Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d18a13a55fd66152","local-member-attributes":"{Name:embed-certs-832735 ClientURLs:[https://192.168.61.19:2379]}","request-path":"/0/members/d18a13a55fd66152/attributes","cluster-id":"5cb6c2b3fa543b56","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-10T11:49:09.008465Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T11:49:09.008533Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T11:49:09.011097Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-10T11:49:09.011165Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-10T11:49:09.016848Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.19:2379"}
	{"level":"info","ts":"2024-06-10T11:49:09.026431Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-06-10T11:57:04.196339Z","caller":"traceutil/trace.go:171","msg":"trace[490671863] transaction","detail":"{read_only:false; response_revision:953; number_of_response:1; }","duration":"577.007146ms","start":"2024-06-10T11:57:03.61928Z","end":"2024-06-10T11:57:04.196287Z","steps":["trace[490671863] 'process raft request'  (duration: 576.509689ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T11:57:04.19777Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-10T11:57:03.619255Z","time spent":"577.783784ms","remote":"127.0.0.1:49968","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:952 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-06-10T11:57:04.19816Z","caller":"traceutil/trace.go:171","msg":"trace[2102256102] linearizableReadLoop","detail":"{readStateIndex:1086; appliedIndex:1085; }","duration":"457.455868ms","start":"2024-06-10T11:57:03.738465Z","end":"2024-06-10T11:57:04.195921Z","steps":["trace[2102256102] 'read index received'  (duration: 457.268418ms)","trace[2102256102] 'applied index is now lower than readState.Index'  (duration: 186.724µs)"],"step_count":2}
	{"level":"warn","ts":"2024-06-10T11:57:04.198294Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"459.837238ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-10T11:57:04.198443Z","caller":"traceutil/trace.go:171","msg":"trace[1450497661] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:953; }","duration":"460.007293ms","start":"2024-06-10T11:57:03.73842Z","end":"2024-06-10T11:57:04.198427Z","steps":["trace[1450497661] 'agreement among raft nodes before linearized reading'  (duration: 459.835419ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T11:57:04.198497Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-10T11:57:03.738407Z","time spent":"460.08014ms","remote":"127.0.0.1:50000","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":28,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"warn","ts":"2024-06-10T11:57:04.198824Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"264.389096ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-10T11:57:04.199139Z","caller":"traceutil/trace.go:171","msg":"trace[993035526] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:953; }","duration":"264.722516ms","start":"2024-06-10T11:57:03.934408Z","end":"2024-06-10T11:57:04.19913Z","steps":["trace[993035526] 'agreement among raft nodes before linearized reading'  (duration: 264.389842ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T11:59:09.07328Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":810}
	{"level":"info","ts":"2024-06-10T11:59:09.083576Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":810,"took":"9.937223ms","hash":311689356,"current-db-size-bytes":2564096,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2564096,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-06-10T11:59:09.083633Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":311689356,"revision":810,"compact-revision":-1}
	{"level":"info","ts":"2024-06-10T12:04:09.083194Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1052}
	{"level":"info","ts":"2024-06-10T12:04:09.08734Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1052,"took":"3.782738ms","hash":4280538156,"current-db-size-bytes":2564096,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1642496,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-06-10T12:04:09.087388Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4280538156,"revision":1052,"compact-revision":810}
	{"level":"info","ts":"2024-06-10T12:09:09.090345Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1295}
	{"level":"info","ts":"2024-06-10T12:09:09.094037Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1295,"took":"3.362007ms","hash":1120908848,"current-db-size-bytes":2564096,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1613824,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-06-10T12:09:09.094121Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1120908848,"revision":1295,"compact-revision":1052}
	
	
	==> kernel <==
	 12:09:09 up 20 min,  0 users,  load average: 0.51, 0.19, 0.07
	Linux embed-certs-832735 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29] <==
	I0610 12:02:11.429935       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:04:10.431460       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:04:10.431857       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0610 12:04:11.432535       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:04:11.432769       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0610 12:04:11.432866       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:04:11.432662       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:04:11.432991       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0610 12:04:11.434969       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:05:11.434051       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:05:11.434281       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0610 12:05:11.434294       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:05:11.435167       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:05:11.435281       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0610 12:05:11.435346       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:07:11.434754       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:07:11.435116       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0610 12:07:11.435150       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:07:11.435910       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:07:11.435952       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0610 12:07:11.437114       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43] <==
	I0610 12:03:24.439548       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:03:53.962685       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:03:54.446895       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:04:23.968984       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:04:24.454337       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:04:53.973823       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:04:54.462529       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0610 12:05:23.886466       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="243.75µs"
	E0610 12:05:23.978219       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:05:24.470626       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0610 12:05:34.883204       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="202.462µs"
	E0610 12:05:53.983832       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:05:54.477615       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:06:23.990518       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:06:24.484912       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:06:53.994830       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:06:54.492693       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:07:24.001470       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:07:24.499751       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:07:54.006285       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:07:54.508854       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:08:24.011561       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:08:24.516898       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:08:54.016493       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:08:54.523974       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb] <==
	I0610 11:49:11.465131       1 server_linux.go:69] "Using iptables proxy"
	I0610 11:49:11.477896       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.19"]
	I0610 11:49:11.531590       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 11:49:11.531641       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 11:49:11.531658       1 server_linux.go:165] "Using iptables Proxier"
	I0610 11:49:11.534364       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 11:49:11.534893       1 server.go:872] "Version info" version="v1.30.1"
	I0610 11:49:11.534924       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 11:49:11.538387       1 config.go:192] "Starting service config controller"
	I0610 11:49:11.538458       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 11:49:11.538539       1 config.go:101] "Starting endpoint slice config controller"
	I0610 11:49:11.538581       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 11:49:11.540749       1 config.go:319] "Starting node config controller"
	I0610 11:49:11.540777       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 11:49:11.639723       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 11:49:11.639808       1 shared_informer.go:320] Caches are synced for service config
	I0610 11:49:11.641925       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9] <==
	I0610 11:49:08.891370       1 serving.go:380] Generated self-signed cert in-memory
	W0610 11:49:10.379333       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0610 11:49:10.379413       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 11:49:10.379424       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0610 11:49:10.379430       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 11:49:10.414577       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0610 11:49:10.414625       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 11:49:10.418276       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0610 11:49:10.418361       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 11:49:10.418381       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 11:49:10.418397       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 11:49:10.519265       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 10 12:07:04 embed-certs-832735 kubelet[947]: E0610 12:07:04.866461     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5zg8j" podUID="e979b4b0-356d-479d-990f-d9e6e46a1a9b"
	Jun 10 12:07:05 embed-certs-832735 kubelet[947]: E0610 12:07:05.884029     947 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:07:05 embed-certs-832735 kubelet[947]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:07:05 embed-certs-832735 kubelet[947]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:07:05 embed-certs-832735 kubelet[947]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:07:05 embed-certs-832735 kubelet[947]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:07:15 embed-certs-832735 kubelet[947]: E0610 12:07:15.868110     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5zg8j" podUID="e979b4b0-356d-479d-990f-d9e6e46a1a9b"
	Jun 10 12:07:26 embed-certs-832735 kubelet[947]: E0610 12:07:26.866217     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5zg8j" podUID="e979b4b0-356d-479d-990f-d9e6e46a1a9b"
	Jun 10 12:07:38 embed-certs-832735 kubelet[947]: E0610 12:07:38.866303     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5zg8j" podUID="e979b4b0-356d-479d-990f-d9e6e46a1a9b"
	Jun 10 12:07:51 embed-certs-832735 kubelet[947]: E0610 12:07:51.867025     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5zg8j" podUID="e979b4b0-356d-479d-990f-d9e6e46a1a9b"
	Jun 10 12:08:05 embed-certs-832735 kubelet[947]: E0610 12:08:05.868954     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5zg8j" podUID="e979b4b0-356d-479d-990f-d9e6e46a1a9b"
	Jun 10 12:08:05 embed-certs-832735 kubelet[947]: E0610 12:08:05.884251     947 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:08:05 embed-certs-832735 kubelet[947]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:08:05 embed-certs-832735 kubelet[947]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:08:05 embed-certs-832735 kubelet[947]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:08:05 embed-certs-832735 kubelet[947]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:08:19 embed-certs-832735 kubelet[947]: E0610 12:08:19.867287     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5zg8j" podUID="e979b4b0-356d-479d-990f-d9e6e46a1a9b"
	Jun 10 12:08:34 embed-certs-832735 kubelet[947]: E0610 12:08:34.865744     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5zg8j" podUID="e979b4b0-356d-479d-990f-d9e6e46a1a9b"
	Jun 10 12:08:45 embed-certs-832735 kubelet[947]: E0610 12:08:45.865878     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5zg8j" podUID="e979b4b0-356d-479d-990f-d9e6e46a1a9b"
	Jun 10 12:08:56 embed-certs-832735 kubelet[947]: E0610 12:08:56.866670     947 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5zg8j" podUID="e979b4b0-356d-479d-990f-d9e6e46a1a9b"
	Jun 10 12:09:05 embed-certs-832735 kubelet[947]: E0610 12:09:05.885919     947 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:09:05 embed-certs-832735 kubelet[947]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:09:05 embed-certs-832735 kubelet[947]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:09:05 embed-certs-832735 kubelet[947]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:09:05 embed-certs-832735 kubelet[947]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e] <==
	I0610 11:49:42.166129       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 11:49:42.177420       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 11:49:42.177530       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 11:49:59.576289       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 11:49:59.576460       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-832735_580a0a61-44ec-48ce-9195-bda17322e0ce!
	I0610 11:49:59.578291       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"78273e0e-e224-448f-8e85-7cd63396fc44", APIVersion:"v1", ResourceVersion:"593", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-832735_580a0a61-44ec-48ce-9195-bda17322e0ce became leader
	I0610 11:49:59.676873       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-832735_580a0a61-44ec-48ce-9195-bda17322e0ce!
	
	
	==> storage-provisioner [8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262] <==
	I0610 11:49:11.401301       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0610 11:49:41.411619       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-832735 -n embed-certs-832735
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-832735 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-5zg8j
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-832735 describe pod metrics-server-569cc877fc-5zg8j
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-832735 describe pod metrics-server-569cc877fc-5zg8j: exit status 1 (80.686315ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-5zg8j" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-832735 describe pod metrics-server-569cc877fc-5zg8j: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (391.85s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (395.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-298179 -n no-preload-298179
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-06-10 12:09:22.721781359 +0000 UTC m=+6515.807812850
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-298179 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-298179 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.547µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-298179 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-298179 -n no-preload-298179
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-298179 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-298179 logs -n 25: (1.266684245s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p embed-certs-832735                                  | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC | 10 Jun 24 11:41 UTC |
	| addons  | enable metrics-server -p no-preload-298179             | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC | 10 Jun 24 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-298179                                   | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC | 10 Jun 24 11:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC | 10 Jun 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-832735                 | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-832735                                  | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC | 10 Jun 24 11:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-166693        | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-298179                  | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC | 10 Jun 24 11:44 UTC |
	| start   | -p                                                     | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC | 10 Jun 24 11:49 UTC |
	|         | default-k8s-diff-port-281114                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p no-preload-298179                                   | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC | 10 Jun 24 11:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-166693                              | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:45 UTC | 10 Jun 24 11:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-166693             | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:45 UTC | 10 Jun 24 11:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-166693                              | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-281114  | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:49 UTC | 10 Jun 24 11:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:49 UTC |                     |
	|         | default-k8s-diff-port-281114                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-281114       | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:51 UTC | 10 Jun 24 12:02 UTC |
	|         | default-k8s-diff-port-281114                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-166693                              | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 12:08 UTC | 10 Jun 24 12:08 UTC |
	| start   | -p newest-cni-003554 --memory=2200 --alsologtostderr   | newest-cni-003554            | jenkins | v1.33.1 | 10 Jun 24 12:08 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| delete  | -p embed-certs-832735                                  | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 12:09 UTC | 10 Jun 24 12:09 UTC |
	| start   | -p auto-491653 --memory=3072                           | auto-491653                  | jenkins | v1.33.1 | 10 Jun 24 12:09 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 12:09:11
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 12:09:11.259264   64909 out.go:291] Setting OutFile to fd 1 ...
	I0610 12:09:11.259528   64909 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 12:09:11.259538   64909 out.go:304] Setting ErrFile to fd 2...
	I0610 12:09:11.259545   64909 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 12:09:11.259807   64909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 12:09:11.260439   64909 out.go:298] Setting JSON to false
	I0610 12:09:11.261571   64909 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6692,"bootTime":1718014659,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 12:09:11.261634   64909 start.go:139] virtualization: kvm guest
	I0610 12:09:11.264495   64909 out.go:177] * [auto-491653] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 12:09:11.265817   64909 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 12:09:11.267170   64909 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 12:09:11.265852   64909 notify.go:220] Checking for updates...
	I0610 12:09:11.268985   64909 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 12:09:11.270363   64909 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 12:09:11.271680   64909 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 12:09:11.273080   64909 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 12:09:11.274955   64909 config.go:182] Loaded profile config "default-k8s-diff-port-281114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 12:09:11.275127   64909 config.go:182] Loaded profile config "newest-cni-003554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 12:09:11.275277   64909 config.go:182] Loaded profile config "no-preload-298179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 12:09:11.275383   64909 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 12:09:11.312915   64909 out.go:177] * Using the kvm2 driver based on user configuration
	I0610 12:09:11.314322   64909 start.go:297] selected driver: kvm2
	I0610 12:09:11.314338   64909 start.go:901] validating driver "kvm2" against <nil>
	I0610 12:09:11.314350   64909 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 12:09:11.315173   64909 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 12:09:11.315268   64909 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19046-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0610 12:09:11.331925   64909 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0610 12:09:11.331992   64909 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 12:09:11.332197   64909 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 12:09:11.332254   64909 cni.go:84] Creating CNI manager for ""
	I0610 12:09:11.332265   64909 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 12:09:11.332273   64909 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 12:09:11.332330   64909 start.go:340] cluster config:
	{Name:auto-491653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:auto-491653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 12:09:11.332417   64909 iso.go:125] acquiring lock: {Name:mk85871dbcb370cef71a4aa2ff7ef43503908c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 12:09:11.334321   64909 out.go:177] * Starting "auto-491653" primary control-plane node in "auto-491653" cluster
	I0610 12:09:11.936831   64548 main.go:141] libmachine: (newest-cni-003554) DBG | domain newest-cni-003554 has defined MAC address 52:54:00:8d:92:91 in network mk-newest-cni-003554
	I0610 12:09:11.937435   64548 main.go:141] libmachine: (newest-cni-003554) DBG | unable to find current IP address of domain newest-cni-003554 in network mk-newest-cni-003554
	I0610 12:09:11.937458   64548 main.go:141] libmachine: (newest-cni-003554) DBG | I0610 12:09:11.937382   64572 retry.go:31] will retry after 2.890402507s: waiting for machine to come up
	I0610 12:09:14.830331   64548 main.go:141] libmachine: (newest-cni-003554) DBG | domain newest-cni-003554 has defined MAC address 52:54:00:8d:92:91 in network mk-newest-cni-003554
	I0610 12:09:14.831061   64548 main.go:141] libmachine: (newest-cni-003554) DBG | unable to find current IP address of domain newest-cni-003554 in network mk-newest-cni-003554
	I0610 12:09:14.831086   64548 main.go:141] libmachine: (newest-cni-003554) DBG | I0610 12:09:14.830994   64572 retry.go:31] will retry after 2.738716002s: waiting for machine to come up
	I0610 12:09:11.335659   64909 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 12:09:11.335691   64909 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0610 12:09:11.335697   64909 cache.go:56] Caching tarball of preloaded images
	I0610 12:09:11.335766   64909 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 12:09:11.335778   64909 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0610 12:09:11.335865   64909 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/auto-491653/config.json ...
	I0610 12:09:11.335882   64909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/auto-491653/config.json: {Name:mkb0839ccf5413a21c9c3dadcc36c2794de180c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:09:11.336023   64909 start.go:360] acquireMachinesLock for auto-491653: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 12:09:17.572482   64548 main.go:141] libmachine: (newest-cni-003554) DBG | domain newest-cni-003554 has defined MAC address 52:54:00:8d:92:91 in network mk-newest-cni-003554
	I0610 12:09:17.572970   64548 main.go:141] libmachine: (newest-cni-003554) DBG | unable to find current IP address of domain newest-cni-003554 in network mk-newest-cni-003554
	I0610 12:09:17.573000   64548 main.go:141] libmachine: (newest-cni-003554) DBG | I0610 12:09:17.572927   64572 retry.go:31] will retry after 3.905794402s: waiting for machine to come up
	
	
	==> CRI-O <==
	Jun 10 12:09:23 no-preload-298179 crio[723]: time="2024-06-10 12:09:23.393326302Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718021363393302576,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7edd5353-c960-4632-b0af-b0826721f945 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:09:23 no-preload-298179 crio[723]: time="2024-06-10 12:09:23.393902034Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92ce2d3b-64bb-4d58-9424-d1931d3fef59 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:09:23 no-preload-298179 crio[723]: time="2024-06-10 12:09:23.393958439Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92ce2d3b-64bb-4d58-9424-d1931d3fef59 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:09:23 no-preload-298179 crio[723]: time="2024-06-10 12:09:23.394286621Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:683e59037f5932468d2405bbd3fd52d77ce5ad62e1759892e8d937191e057437,PodSandboxId:deee4653c7072b7c169a0567c8244abb526ea2a11a4098043cf947cc0401f0f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718020424861177652,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 783f523c-4c21-4ae0-bc18-9c391e7342b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1f746830,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b2e411906f14885e5c4a5b5164f742d7283e55c02bc310f8571b5ab021ce97e,PodSandboxId:66fc0cde87620c4b46299ad7ab86b3173f3a617d0a268e2cd36b76691ca25c43,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020424338795209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f622z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16cb4de3-afa9-4e45-bc85-e51273973808,},Annotations:map[string]string{io.kubernetes.container.hash: 7a12602a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a066b8539f611e071a9acfaeb6cc35563e3b55b5b270b17884aa8c2432be6a3,PodSandboxId:714f0a77adfbb94747c437f5a2a45f6ffee84236ddbe67f02786e139d992252e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020424386935854,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9mqrm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62
69d670-dffa-4526-8117-0b44df04554a,},Annotations:map[string]string{io.kubernetes.container.hash: c5356ac7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fddfd1132be797ed9025b8977067f68a9016051286041ed4ee3c38d3225136cd,PodSandboxId:6cc15e22a4c6ea6bfddd088767d080ae4f8dc0dc95bbbf793e0d9c05ab802627,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1718020423442794498,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fhndh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f848e7-44f6-4ab1-bf94-3189733abca2,},Annotations:map[string]string{io.kubernetes.container.hash: 7a55cea4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782d58441abcdd0791ae72b44e699f9f6a4c30867e4aec8eca2a0338dbaf33d0,PodSandboxId:72debdf12a31460f1dd1edbbb4834b7f471970978d402dc3360db0d240cfc374,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718020404548079400,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af991281f76a9c4d496d9158234dfc48,},Annotations:map[string]string{io.kubernetes.container.hash: 29667b85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cba6ee23d7a88b9d4aae2cad62cb70292ab5ff9a7f85aa6cef1aa90959382e9b,PodSandboxId:bb9cc9dfa0362795f02853309767ab44429a06bbf87b8887ee52eb4d7f379e1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718020404524877296,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac7787f0f4433798238ba6c479ed8cbe,},Annotations:map[string]string{io.kubernetes.container.hash: 44495568,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07a7b43ca72a0fe56bf21afcae51fd55480c85f73a08bd848fd2884f99005058,PodSandboxId:9d35d2e40c9b05e62daeb3ac27d37eaa125bbd4abd15f4321c57fa3cb327f4cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718020404512735021,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbed13fc899dffe5489a781ad246db8,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d5466fc0761ffafa56f8b58377652ecea0499411a50a90195f70039ad5ab9b,PodSandboxId:4b4cb53ff65abad35f6a102515e7e9a5c01be3e536f533a75a32ca4259afbb37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718020404424265060,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8dba9ebe3c0b0b9d3dac53b9b8aedb7,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=92ce2d3b-64bb-4d58-9424-d1931d3fef59 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:09:23 no-preload-298179 crio[723]: time="2024-06-10 12:09:23.437727205Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=be4d59b6-774d-453a-af3f-c8069fcada6d name=/runtime.v1.RuntimeService/Version
	Jun 10 12:09:23 no-preload-298179 crio[723]: time="2024-06-10 12:09:23.437804398Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=be4d59b6-774d-453a-af3f-c8069fcada6d name=/runtime.v1.RuntimeService/Version
	Jun 10 12:09:23 no-preload-298179 crio[723]: time="2024-06-10 12:09:23.439136551Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f82a0fa9-c689-4788-b80a-fb23c0dd5f88 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:09:23 no-preload-298179 crio[723]: time="2024-06-10 12:09:23.439461456Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718021363439440307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f82a0fa9-c689-4788-b80a-fb23c0dd5f88 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:09:23 no-preload-298179 crio[723]: time="2024-06-10 12:09:23.439857955Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ee3179e-9d03-4b0b-a9ad-f4614a392e28 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:09:23 no-preload-298179 crio[723]: time="2024-06-10 12:09:23.439908497Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ee3179e-9d03-4b0b-a9ad-f4614a392e28 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:09:23 no-preload-298179 crio[723]: time="2024-06-10 12:09:23.440144303Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:683e59037f5932468d2405bbd3fd52d77ce5ad62e1759892e8d937191e057437,PodSandboxId:deee4653c7072b7c169a0567c8244abb526ea2a11a4098043cf947cc0401f0f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718020424861177652,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 783f523c-4c21-4ae0-bc18-9c391e7342b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1f746830,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b2e411906f14885e5c4a5b5164f742d7283e55c02bc310f8571b5ab021ce97e,PodSandboxId:66fc0cde87620c4b46299ad7ab86b3173f3a617d0a268e2cd36b76691ca25c43,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020424338795209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f622z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16cb4de3-afa9-4e45-bc85-e51273973808,},Annotations:map[string]string{io.kubernetes.container.hash: 7a12602a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a066b8539f611e071a9acfaeb6cc35563e3b55b5b270b17884aa8c2432be6a3,PodSandboxId:714f0a77adfbb94747c437f5a2a45f6ffee84236ddbe67f02786e139d992252e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020424386935854,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9mqrm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62
69d670-dffa-4526-8117-0b44df04554a,},Annotations:map[string]string{io.kubernetes.container.hash: c5356ac7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fddfd1132be797ed9025b8977067f68a9016051286041ed4ee3c38d3225136cd,PodSandboxId:6cc15e22a4c6ea6bfddd088767d080ae4f8dc0dc95bbbf793e0d9c05ab802627,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1718020423442794498,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fhndh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f848e7-44f6-4ab1-bf94-3189733abca2,},Annotations:map[string]string{io.kubernetes.container.hash: 7a55cea4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782d58441abcdd0791ae72b44e699f9f6a4c30867e4aec8eca2a0338dbaf33d0,PodSandboxId:72debdf12a31460f1dd1edbbb4834b7f471970978d402dc3360db0d240cfc374,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718020404548079400,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af991281f76a9c4d496d9158234dfc48,},Annotations:map[string]string{io.kubernetes.container.hash: 29667b85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cba6ee23d7a88b9d4aae2cad62cb70292ab5ff9a7f85aa6cef1aa90959382e9b,PodSandboxId:bb9cc9dfa0362795f02853309767ab44429a06bbf87b8887ee52eb4d7f379e1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718020404524877296,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac7787f0f4433798238ba6c479ed8cbe,},Annotations:map[string]string{io.kubernetes.container.hash: 44495568,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07a7b43ca72a0fe56bf21afcae51fd55480c85f73a08bd848fd2884f99005058,PodSandboxId:9d35d2e40c9b05e62daeb3ac27d37eaa125bbd4abd15f4321c57fa3cb327f4cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718020404512735021,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbed13fc899dffe5489a781ad246db8,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d5466fc0761ffafa56f8b58377652ecea0499411a50a90195f70039ad5ab9b,PodSandboxId:4b4cb53ff65abad35f6a102515e7e9a5c01be3e536f533a75a32ca4259afbb37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718020404424265060,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8dba9ebe3c0b0b9d3dac53b9b8aedb7,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ee3179e-9d03-4b0b-a9ad-f4614a392e28 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:09:23 no-preload-298179 crio[723]: time="2024-06-10 12:09:23.486014057Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9efcac86-37fe-45cd-a5c5-31d282900426 name=/runtime.v1.RuntimeService/Version
	Jun 10 12:09:23 no-preload-298179 crio[723]: time="2024-06-10 12:09:23.486207433Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9efcac86-37fe-45cd-a5c5-31d282900426 name=/runtime.v1.RuntimeService/Version
	Jun 10 12:09:23 no-preload-298179 crio[723]: time="2024-06-10 12:09:23.487604739Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b9b7e66b-2657-48b3-9c87-d93812052724 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:09:23 no-preload-298179 crio[723]: time="2024-06-10 12:09:23.488005628Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718021363487985782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b9b7e66b-2657-48b3-9c87-d93812052724 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:09:23 no-preload-298179 crio[723]: time="2024-06-10 12:09:23.488761277Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2d43e69a-80ed-4705-96ed-3af747b1bab3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:09:23 no-preload-298179 crio[723]: time="2024-06-10 12:09:23.488810174Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2d43e69a-80ed-4705-96ed-3af747b1bab3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:09:23 no-preload-298179 crio[723]: time="2024-06-10 12:09:23.488991961Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:683e59037f5932468d2405bbd3fd52d77ce5ad62e1759892e8d937191e057437,PodSandboxId:deee4653c7072b7c169a0567c8244abb526ea2a11a4098043cf947cc0401f0f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718020424861177652,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 783f523c-4c21-4ae0-bc18-9c391e7342b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1f746830,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b2e411906f14885e5c4a5b5164f742d7283e55c02bc310f8571b5ab021ce97e,PodSandboxId:66fc0cde87620c4b46299ad7ab86b3173f3a617d0a268e2cd36b76691ca25c43,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020424338795209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f622z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16cb4de3-afa9-4e45-bc85-e51273973808,},Annotations:map[string]string{io.kubernetes.container.hash: 7a12602a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a066b8539f611e071a9acfaeb6cc35563e3b55b5b270b17884aa8c2432be6a3,PodSandboxId:714f0a77adfbb94747c437f5a2a45f6ffee84236ddbe67f02786e139d992252e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020424386935854,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9mqrm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62
69d670-dffa-4526-8117-0b44df04554a,},Annotations:map[string]string{io.kubernetes.container.hash: c5356ac7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fddfd1132be797ed9025b8977067f68a9016051286041ed4ee3c38d3225136cd,PodSandboxId:6cc15e22a4c6ea6bfddd088767d080ae4f8dc0dc95bbbf793e0d9c05ab802627,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1718020423442794498,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fhndh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f848e7-44f6-4ab1-bf94-3189733abca2,},Annotations:map[string]string{io.kubernetes.container.hash: 7a55cea4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782d58441abcdd0791ae72b44e699f9f6a4c30867e4aec8eca2a0338dbaf33d0,PodSandboxId:72debdf12a31460f1dd1edbbb4834b7f471970978d402dc3360db0d240cfc374,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718020404548079400,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af991281f76a9c4d496d9158234dfc48,},Annotations:map[string]string{io.kubernetes.container.hash: 29667b85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cba6ee23d7a88b9d4aae2cad62cb70292ab5ff9a7f85aa6cef1aa90959382e9b,PodSandboxId:bb9cc9dfa0362795f02853309767ab44429a06bbf87b8887ee52eb4d7f379e1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718020404524877296,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac7787f0f4433798238ba6c479ed8cbe,},Annotations:map[string]string{io.kubernetes.container.hash: 44495568,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07a7b43ca72a0fe56bf21afcae51fd55480c85f73a08bd848fd2884f99005058,PodSandboxId:9d35d2e40c9b05e62daeb3ac27d37eaa125bbd4abd15f4321c57fa3cb327f4cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718020404512735021,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbed13fc899dffe5489a781ad246db8,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d5466fc0761ffafa56f8b58377652ecea0499411a50a90195f70039ad5ab9b,PodSandboxId:4b4cb53ff65abad35f6a102515e7e9a5c01be3e536f533a75a32ca4259afbb37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718020404424265060,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8dba9ebe3c0b0b9d3dac53b9b8aedb7,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2d43e69a-80ed-4705-96ed-3af747b1bab3 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:09:23 no-preload-298179 crio[723]: time="2024-06-10 12:09:23.523575971Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ba82d85a-bbb5-4b56-99ac-cdf9e6f20290 name=/runtime.v1.RuntimeService/Version
	Jun 10 12:09:23 no-preload-298179 crio[723]: time="2024-06-10 12:09:23.523646570Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba82d85a-bbb5-4b56-99ac-cdf9e6f20290 name=/runtime.v1.RuntimeService/Version
	Jun 10 12:09:23 no-preload-298179 crio[723]: time="2024-06-10 12:09:23.524533375Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b93e41cb-7947-4431-ae76-beab8dc0f802 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:09:23 no-preload-298179 crio[723]: time="2024-06-10 12:09:23.524909600Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718021363524888873,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99934,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b93e41cb-7947-4431-ae76-beab8dc0f802 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:09:23 no-preload-298179 crio[723]: time="2024-06-10 12:09:23.525338158Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1f0dfd13-f5b9-4963-a7c3-e0df773bd36c name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:09:23 no-preload-298179 crio[723]: time="2024-06-10 12:09:23.525388306Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1f0dfd13-f5b9-4963-a7c3-e0df773bd36c name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:09:23 no-preload-298179 crio[723]: time="2024-06-10 12:09:23.525556615Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:683e59037f5932468d2405bbd3fd52d77ce5ad62e1759892e8d937191e057437,PodSandboxId:deee4653c7072b7c169a0567c8244abb526ea2a11a4098043cf947cc0401f0f9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718020424861177652,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 783f523c-4c21-4ae0-bc18-9c391e7342b0,},Annotations:map[string]string{io.kubernetes.container.hash: 1f746830,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b2e411906f14885e5c4a5b5164f742d7283e55c02bc310f8571b5ab021ce97e,PodSandboxId:66fc0cde87620c4b46299ad7ab86b3173f3a617d0a268e2cd36b76691ca25c43,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020424338795209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-f622z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16cb4de3-afa9-4e45-bc85-e51273973808,},Annotations:map[string]string{io.kubernetes.container.hash: 7a12602a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a066b8539f611e071a9acfaeb6cc35563e3b55b5b270b17884aa8c2432be6a3,PodSandboxId:714f0a77adfbb94747c437f5a2a45f6ffee84236ddbe67f02786e139d992252e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020424386935854,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9mqrm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62
69d670-dffa-4526-8117-0b44df04554a,},Annotations:map[string]string{io.kubernetes.container.hash: c5356ac7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fddfd1132be797ed9025b8977067f68a9016051286041ed4ee3c38d3225136cd,PodSandboxId:6cc15e22a4c6ea6bfddd088767d080ae4f8dc0dc95bbbf793e0d9c05ab802627,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:
1718020423442794498,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fhndh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f848e7-44f6-4ab1-bf94-3189733abca2,},Annotations:map[string]string{io.kubernetes.container.hash: 7a55cea4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782d58441abcdd0791ae72b44e699f9f6a4c30867e4aec8eca2a0338dbaf33d0,PodSandboxId:72debdf12a31460f1dd1edbbb4834b7f471970978d402dc3360db0d240cfc374,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1718020404548079400,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af991281f76a9c4d496d9158234dfc48,},Annotations:map[string]string{io.kubernetes.container.hash: 29667b85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cba6ee23d7a88b9d4aae2cad62cb70292ab5ff9a7f85aa6cef1aa90959382e9b,PodSandboxId:bb9cc9dfa0362795f02853309767ab44429a06bbf87b8887ee52eb4d7f379e1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718020404524877296,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac7787f0f4433798238ba6c479ed8cbe,},Annotations:map[string]string{io.kubernetes.container.hash: 44495568,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07a7b43ca72a0fe56bf21afcae51fd55480c85f73a08bd848fd2884f99005058,PodSandboxId:9d35d2e40c9b05e62daeb3ac27d37eaa125bbd4abd15f4321c57fa3cb327f4cd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718020404512735021,Labels:map[string]string{io.kubernetes.container.name: kube-sched
uler,io.kubernetes.pod.name: kube-scheduler-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fbed13fc899dffe5489a781ad246db8,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d5466fc0761ffafa56f8b58377652ecea0499411a50a90195f70039ad5ab9b,PodSandboxId:4b4cb53ff65abad35f6a102515e7e9a5c01be3e536f533a75a32ca4259afbb37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1718020404424265060,Labels:map[string]string{io.kubernetes.container.name: kube-controlle
r-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-298179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8dba9ebe3c0b0b9d3dac53b9b8aedb7,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1f0dfd13-f5b9-4963-a7c3-e0df773bd36c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	683e59037f593       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   deee4653c7072       storage-provisioner
	5a066b8539f61       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   714f0a77adfbb       coredns-7db6d8ff4d-9mqrm
	7b2e411906f14       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   66fc0cde87620       coredns-7db6d8ff4d-f622z
	fddfd1132be79       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   15 minutes ago      Running             kube-proxy                0                   6cc15e22a4c6e       kube-proxy-fhndh
	782d58441abcd       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   15 minutes ago      Running             etcd                      2                   72debdf12a314       etcd-no-preload-298179
	cba6ee23d7a88       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   15 minutes ago      Running             kube-apiserver            2                   bb9cc9dfa0362       kube-apiserver-no-preload-298179
	07a7b43ca72a0       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   15 minutes ago      Running             kube-scheduler            2                   9d35d2e40c9b0       kube-scheduler-no-preload-298179
	20d5466fc0761       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   15 minutes ago      Running             kube-controller-manager   2                   4b4cb53ff65ab       kube-controller-manager-no-preload-298179
	
	
	==> coredns [5a066b8539f611e071a9acfaeb6cc35563e3b55b5b270b17884aa8c2432be6a3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [7b2e411906f14885e5c4a5b5164f742d7283e55c02bc310f8571b5ab021ce97e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-298179
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-298179
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=no-preload-298179
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T11_53_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 11:53:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-298179
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 12:09:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 12:09:08 +0000   Mon, 10 Jun 2024 11:53:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 12:09:08 +0000   Mon, 10 Jun 2024 11:53:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 12:09:08 +0000   Mon, 10 Jun 2024 11:53:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 12:09:08 +0000   Mon, 10 Jun 2024 11:53:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.48
	  Hostname:    no-preload-298179
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 29602cfed4bd43bfa2d60195b75916d2
	  System UUID:                29602cfe-d4bd-43bf-a2d6-0195b75916d2
	  Boot ID:                    d0445246-42cf-4286-a8eb-214294939a5d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-9mqrm                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-f622z                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-no-preload-298179                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-no-preload-298179             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-no-preload-298179    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-fhndh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-no-preload-298179             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-569cc877fc-jp7dr              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x2 over 15m)  kubelet          Node no-preload-298179 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x2 over 15m)  kubelet          Node no-preload-298179 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x2 over 15m)  kubelet          Node no-preload-298179 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                node-controller  Node no-preload-298179 event: Registered Node no-preload-298179 in Controller
	
	
	==> dmesg <==
	[  +0.042998] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.613186] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.883261] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.560534] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.011373] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.056246] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061587] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.162937] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.141107] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.294949] systemd-fstab-generator[708]: Ignoring "noauto" option for root device
	[ +16.054144] systemd-fstab-generator[1229]: Ignoring "noauto" option for root device
	[  +0.059586] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.600492] systemd-fstab-generator[1354]: Ignoring "noauto" option for root device
	[  +3.872291] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.411535] kauditd_printk_skb: 37 callbacks suppressed
	[  +6.614635] kauditd_printk_skb: 35 callbacks suppressed
	[Jun10 11:53] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.520447] systemd-fstab-generator[4036]: Ignoring "noauto" option for root device
	[  +6.053381] systemd-fstab-generator[4360]: Ignoring "noauto" option for root device
	[  +0.071960] kauditd_printk_skb: 53 callbacks suppressed
	[ +13.753071] systemd-fstab-generator[4574]: Ignoring "noauto" option for root device
	[  +0.107073] kauditd_printk_skb: 12 callbacks suppressed
	[Jun10 11:54] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [782d58441abcdd0791ae72b44e699f9f6a4c30867e4aec8eca2a0338dbaf33d0] <==
	{"level":"info","ts":"2024-06-10T11:53:25.369358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a50af7ffd27cbe1 received MsgVoteResp from 7a50af7ffd27cbe1 at term 2"}
	{"level":"info","ts":"2024-06-10T11:53:25.369386Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7a50af7ffd27cbe1 became leader at term 2"}
	{"level":"info","ts":"2024-06-10T11:53:25.369415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7a50af7ffd27cbe1 elected leader 7a50af7ffd27cbe1 at term 2"}
	{"level":"info","ts":"2024-06-10T11:53:25.373967Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7a50af7ffd27cbe1","local-member-attributes":"{Name:no-preload-298179 ClientURLs:[https://192.168.39.48:2379]}","request-path":"/0/members/7a50af7ffd27cbe1/attributes","cluster-id":"59383b002ca7add2","publish-timeout":"7s"}
	{"level":"info","ts":"2024-06-10T11:53:25.374098Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T11:53:25.374183Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T11:53:25.379088Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-06-10T11:53:25.379127Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-06-10T11:53:25.374213Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-06-10T11:53:25.381282Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"59383b002ca7add2","local-member-id":"7a50af7ffd27cbe1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T11:53:25.381378Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T11:53:25.381422Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-06-10T11:53:25.382681Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.48:2379"}
	{"level":"info","ts":"2024-06-10T11:53:25.384793Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-06-10T11:57:04.035715Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"341.347392ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14691181797706206169 > lease_revoke:<id:4be19001fef68b8a>","response":"size:27"}
	{"level":"info","ts":"2024-06-10T11:57:04.03644Z","caller":"traceutil/trace.go:171","msg":"trace[196209639] linearizableReadLoop","detail":"{readStateIndex:667; appliedIndex:666; }","duration":"188.134143ms","start":"2024-06-10T11:57:03.848262Z","end":"2024-06-10T11:57:04.036396Z","steps":["trace[196209639] 'read index received'  (duration: 23.523µs)","trace[196209639] 'applied index is now lower than readState.Index'  (duration: 188.109146ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-10T11:57:04.036753Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.431877ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-10T11:57:04.036823Z","caller":"traceutil/trace.go:171","msg":"trace[63460844] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:609; }","duration":"188.569051ms","start":"2024-06-10T11:57:03.848235Z","end":"2024-06-10T11:57:04.036804Z","steps":["trace[63460844] 'agreement among raft nodes before linearized reading'  (duration: 188.424802ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T11:57:04.198602Z","caller":"traceutil/trace.go:171","msg":"trace[1682754429] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"116.932566ms","start":"2024-06-10T11:57:04.081632Z","end":"2024-06-10T11:57:04.198564Z","steps":["trace[1682754429] 'process raft request'  (duration: 116.766038ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T12:03:25.429505Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":675}
	{"level":"info","ts":"2024-06-10T12:03:25.438843Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":675,"took":"8.874758ms","hash":94939584,"current-db-size-bytes":2084864,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2084864,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-06-10T12:03:25.438918Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":94939584,"revision":675,"compact-revision":-1}
	{"level":"info","ts":"2024-06-10T12:08:25.437633Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":917}
	{"level":"info","ts":"2024-06-10T12:08:25.443331Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":917,"took":"4.306333ms","hash":1803407319,"current-db-size-bytes":2084864,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1531904,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-06-10T12:08:25.443426Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1803407319,"revision":917,"compact-revision":675}
	
	
	==> kernel <==
	 12:09:23 up 21 min,  0 users,  load average: 0.18, 0.12, 0.11
	Linux no-preload-298179 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [cba6ee23d7a88b9d4aae2cad62cb70292ab5ff9a7f85aa6cef1aa90959382e9b] <==
	I0610 12:03:27.784500       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:04:27.784190       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:04:27.784585       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0610 12:04:27.784623       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:04:27.784754       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:04:27.784828       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0610 12:04:27.786670       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:06:27.784788       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:06:27.784935       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0610 12:06:27.784949       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:06:27.787274       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:06:27.787379       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0610 12:06:27.787398       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:08:26.790779       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:08:26.791535       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0610 12:08:27.791995       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:08:27.792085       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0610 12:08:27.792097       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:08:27.792157       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:08:27.792226       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0610 12:08:27.793388       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [20d5466fc0761ffafa56f8b58377652ecea0499411a50a90195f70039ad5ab9b] <==
	I0610 12:03:42.816272       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:04:12.136831       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:04:12.824789       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0610 12:04:38.607007       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="144.445µs"
	E0610 12:04:42.143790       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:04:42.832401       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0610 12:04:51.606734       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="79.369µs"
	E0610 12:05:12.148768       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:05:12.840575       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:05:42.154770       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:05:42.849716       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:06:12.160850       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:06:12.860887       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:06:42.166704       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:06:42.869647       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:07:12.172283       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:07:12.879137       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:07:42.178128       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:07:42.889729       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:08:12.184753       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:08:12.898675       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:08:42.190657       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:08:42.906436       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:09:12.195298       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:09:12.914467       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [fddfd1132be797ed9025b8977067f68a9016051286041ed4ee3c38d3225136cd] <==
	I0610 11:53:43.780576       1 server_linux.go:69] "Using iptables proxy"
	I0610 11:53:43.812122       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.48"]
	I0610 11:53:43.911550       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 11:53:43.911603       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 11:53:43.911620       1 server_linux.go:165] "Using iptables Proxier"
	I0610 11:53:43.919783       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 11:53:43.920017       1 server.go:872] "Version info" version="v1.30.1"
	I0610 11:53:43.920087       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 11:53:43.921355       1 config.go:192] "Starting service config controller"
	I0610 11:53:43.921389       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 11:53:43.921415       1 config.go:101] "Starting endpoint slice config controller"
	I0610 11:53:43.921422       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 11:53:43.925806       1 config.go:319] "Starting node config controller"
	I0610 11:53:43.925831       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 11:53:44.022192       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 11:53:44.022256       1 shared_informer.go:320] Caches are synced for service config
	I0610 11:53:44.025953       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [07a7b43ca72a0fe56bf21afcae51fd55480c85f73a08bd848fd2884f99005058] <==
	W0610 11:53:26.800795       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 11:53:26.803177       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0610 11:53:27.632491       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0610 11:53:27.632631       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0610 11:53:27.697631       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0610 11:53:27.697697       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0610 11:53:27.869455       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0610 11:53:27.869500       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0610 11:53:27.883792       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 11:53:27.883836       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0610 11:53:27.944357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 11:53:27.944432       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0610 11:53:28.032736       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 11:53:28.032888       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 11:53:28.046125       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 11:53:28.046212       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0610 11:53:28.048391       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 11:53:28.048457       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0610 11:53:28.070832       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 11:53:28.070881       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0610 11:53:28.090115       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0610 11:53:28.090225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0610 11:53:28.141238       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 11:53:28.141275       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0610 11:53:30.762643       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 10 12:06:29 no-preload-298179 kubelet[4367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:06:38 no-preload-298179 kubelet[4367]: E0610 12:06:38.589902    4367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jp7dr" podUID="21136ae9-40d8-4857-aca5-47e3fa3b7e9c"
	Jun 10 12:06:50 no-preload-298179 kubelet[4367]: E0610 12:06:50.590176    4367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jp7dr" podUID="21136ae9-40d8-4857-aca5-47e3fa3b7e9c"
	Jun 10 12:07:01 no-preload-298179 kubelet[4367]: E0610 12:07:01.590139    4367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jp7dr" podUID="21136ae9-40d8-4857-aca5-47e3fa3b7e9c"
	Jun 10 12:07:12 no-preload-298179 kubelet[4367]: E0610 12:07:12.589441    4367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jp7dr" podUID="21136ae9-40d8-4857-aca5-47e3fa3b7e9c"
	Jun 10 12:07:23 no-preload-298179 kubelet[4367]: E0610 12:07:23.589344    4367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jp7dr" podUID="21136ae9-40d8-4857-aca5-47e3fa3b7e9c"
	Jun 10 12:07:29 no-preload-298179 kubelet[4367]: E0610 12:07:29.609426    4367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:07:29 no-preload-298179 kubelet[4367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:07:29 no-preload-298179 kubelet[4367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:07:29 no-preload-298179 kubelet[4367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:07:29 no-preload-298179 kubelet[4367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:07:36 no-preload-298179 kubelet[4367]: E0610 12:07:36.589454    4367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jp7dr" podUID="21136ae9-40d8-4857-aca5-47e3fa3b7e9c"
	Jun 10 12:07:49 no-preload-298179 kubelet[4367]: E0610 12:07:49.589485    4367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jp7dr" podUID="21136ae9-40d8-4857-aca5-47e3fa3b7e9c"
	Jun 10 12:08:03 no-preload-298179 kubelet[4367]: E0610 12:08:03.589794    4367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jp7dr" podUID="21136ae9-40d8-4857-aca5-47e3fa3b7e9c"
	Jun 10 12:08:16 no-preload-298179 kubelet[4367]: E0610 12:08:16.589582    4367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jp7dr" podUID="21136ae9-40d8-4857-aca5-47e3fa3b7e9c"
	Jun 10 12:08:27 no-preload-298179 kubelet[4367]: E0610 12:08:27.588713    4367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jp7dr" podUID="21136ae9-40d8-4857-aca5-47e3fa3b7e9c"
	Jun 10 12:08:29 no-preload-298179 kubelet[4367]: E0610 12:08:29.614138    4367 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:08:29 no-preload-298179 kubelet[4367]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:08:29 no-preload-298179 kubelet[4367]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:08:29 no-preload-298179 kubelet[4367]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:08:29 no-preload-298179 kubelet[4367]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:08:39 no-preload-298179 kubelet[4367]: E0610 12:08:39.589211    4367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jp7dr" podUID="21136ae9-40d8-4857-aca5-47e3fa3b7e9c"
	Jun 10 12:08:53 no-preload-298179 kubelet[4367]: E0610 12:08:53.590843    4367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jp7dr" podUID="21136ae9-40d8-4857-aca5-47e3fa3b7e9c"
	Jun 10 12:09:04 no-preload-298179 kubelet[4367]: E0610 12:09:04.589569    4367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jp7dr" podUID="21136ae9-40d8-4857-aca5-47e3fa3b7e9c"
	Jun 10 12:09:18 no-preload-298179 kubelet[4367]: E0610 12:09:18.589005    4367 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-jp7dr" podUID="21136ae9-40d8-4857-aca5-47e3fa3b7e9c"
	
	
	==> storage-provisioner [683e59037f5932468d2405bbd3fd52d77ce5ad62e1759892e8d937191e057437] <==
	I0610 11:53:44.993994       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 11:53:45.015618       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 11:53:45.015676       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 11:53:45.035914       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 11:53:45.036716       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4915957e-92d9-4a4d-9131-fdfe380bf55e", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-298179_7d49b23d-b859-4601-8012-2b681d11b5b3 became leader
	I0610 11:53:45.036788       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-298179_7d49b23d-b859-4601-8012-2b681d11b5b3!
	I0610 11:53:45.137331       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-298179_7d49b23d-b859-4601-8012-2b681d11b5b3!
	

                                                
                                                
-- /stdout --
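The "Could not set up iptables canary" block in the kubelet log above is ip6tables failing to open the nat table inside the guest (possibly because the ip6table_nat module is not loaded; that cause is an assumption, only the error text is in the log). A minimal sketch of how one could check this on the node, assuming the no-preload-298179 VM is still running; these commands are illustrative and were not part of the test run:

# Does the ip6tables nat table exist in the guest kernel?
out/minikube-linux-amd64 -p no-preload-298179 ssh -- sudo ip6tables -t nat -L -n
# Is the ip6table_nat module loaded at all?
out/minikube-linux-amd64 -p no-preload-298179 ssh -- lsmod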
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-298179 -n no-preload-298179
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-298179 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-jp7dr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-298179 describe pod metrics-server-569cc877fc-jp7dr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-298179 describe pod metrics-server-569cc877fc-jp7dr: exit status 1 (71.046452ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-jp7dr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-298179 describe pod metrics-server-569cc877fc-jp7dr: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (395.30s)
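The underlying symptom in the kubelet log above is metrics-server stuck in ImagePullBackOff for fake.domain/registry.k8s.io/echoserver:1.4, and by post-mortem time the pod is already gone (the NotFound result above). A hedged sketch of checks one could run while the cluster is still up; the commands are illustrative, and only the profile name, namespace and image are taken from the log:

# Surface the image-pull events for the metrics-server pod:
kubectl --context no-preload-298179 -n kube-system get events --sort-by=.lastTimestamp | grep -i metrics-server
# Reproduce the pull directly on the node to see the registry error:
out/minikube-linux-amd64 -p no-preload-298179 ssh -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4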

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (187.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
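Each warning that follows is one failed poll of the API server. An equivalent manual check is sketched below; the context name is a placeholder, while the namespace, label selector and endpoint come from the warnings themselves:

# Hypothetical manual poll mirroring what the test does:
kubectl --context <old-k8s-version-profile> -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
# While the API server at 192.168.72.34:8443 refuses connections, this fails with
# the same "connect: connection refused" error shown below.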
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
[... the warning above repeated verbatim on every subsequent poll attempt: dial tcp 192.168.72.34:8443: connect: connection refused ...]
E0610 12:06:57.913559   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.34:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.34:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-166693 -n old-k8s-version-166693
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-166693 -n old-k8s-version-166693: exit status 2 (237.256776ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-166693" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-166693 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-166693 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.574µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-166693 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
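The repeated "connection refused" warning above comes from a poll loop: the helper keeps listing pods matching the k8s-app=kubernetes-dashboard selector until one is Running or the 9m0s deadline expires, and every list call fails while the apiserver on 192.168.72.34:8443 is down. A minimal client-go sketch of that kind of wait is shown below; it is not the actual helpers_test.go implementation, and the kubeconfig path and 5s poll interval are assumptions for illustration only.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; point it at the profile under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(9 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// While the apiserver is down, every iteration logs a warning like the ones above.
			fmt.Println("WARNING: pod list returned:", err)
			time.Sleep(5 * time.Second)
			continue
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				fmt.Println("dashboard pod running:", p.Name)
				return
			}
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("context deadline exceeded: dashboard pod never became ready")
}
```

Since the apiserver never comes back in this run, the loop above would exhaust its deadline the same way the test did, which is why the follow-up kubectl describe also fails immediately.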
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-166693 -n old-k8s-version-166693
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-166693 -n old-k8s-version-166693: exit status 2 (235.690955ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-166693 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-324836                              | cert-expiration-324836       | jenkins | v1.33.1 | 10 Jun 24 11:39 UTC | 10 Jun 24 11:39 UTC |
	| delete  | -p                                                     | disable-driver-mounts-036579 | jenkins | v1.33.1 | 10 Jun 24 11:39 UTC | 10 Jun 24 11:39 UTC |
	|         | disable-driver-mounts-036579                           |                              |         |         |                     |                     |
	| start   | -p no-preload-298179                                   | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:39 UTC | 10 Jun 24 11:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-832735            | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:40 UTC | 10 Jun 24 11:40 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-832735                                  | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:40 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC | 10 Jun 24 11:41 UTC |
	| addons  | enable metrics-server -p no-preload-298179             | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC | 10 Jun 24 11:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-298179                                   | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:41 UTC | 10 Jun 24 11:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC | 10 Jun 24 11:44 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-832735                 | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-832735                                  | embed-certs-832735           | jenkins | v1.33.1 | 10 Jun 24 11:42 UTC | 10 Jun 24 11:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-166693        | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-298179                  | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-685160                           | kubernetes-upgrade-685160    | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC | 10 Jun 24 11:44 UTC |
	| start   | -p                                                     | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC | 10 Jun 24 11:49 UTC |
	|         | default-k8s-diff-port-281114                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| start   | -p no-preload-298179                                   | no-preload-298179            | jenkins | v1.33.1 | 10 Jun 24 11:44 UTC | 10 Jun 24 11:53 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-166693                              | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:45 UTC | 10 Jun 24 11:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-166693             | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:45 UTC | 10 Jun 24 11:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-166693                              | old-k8s-version-166693       | jenkins | v1.33.1 | 10 Jun 24 11:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-281114  | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:49 UTC | 10 Jun 24 11:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:49 UTC |                     |
	|         | default-k8s-diff-port-281114                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-281114       | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-281114 | jenkins | v1.33.1 | 10 Jun 24 11:51 UTC | 10 Jun 24 12:02 UTC |
	|         | default-k8s-diff-port-281114                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 11:51:53
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 11:51:53.675460   60146 out.go:291] Setting OutFile to fd 1 ...
	I0610 11:51:53.675676   60146 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:51:53.675684   60146 out.go:304] Setting ErrFile to fd 2...
	I0610 11:51:53.675688   60146 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:51:53.675848   60146 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 11:51:53.676386   60146 out.go:298] Setting JSON to false
	I0610 11:51:53.677403   60146 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5655,"bootTime":1718014659,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 11:51:53.677465   60146 start.go:139] virtualization: kvm guest
	I0610 11:51:53.679851   60146 out.go:177] * [default-k8s-diff-port-281114] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 11:51:53.681209   60146 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 11:51:53.682492   60146 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 11:51:53.681162   60146 notify.go:220] Checking for updates...
	I0610 11:51:53.683939   60146 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 11:51:53.685202   60146 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 11:51:53.686363   60146 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 11:51:53.687770   60146 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 11:51:53.689668   60146 config.go:182] Loaded profile config "default-k8s-diff-port-281114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:51:53.690093   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:51:53.690167   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:51:53.705134   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35827
	I0610 11:51:53.705647   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:51:53.706289   60146 main.go:141] libmachine: Using API Version  1
	I0610 11:51:53.706314   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:51:53.706603   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:51:53.706788   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:51:53.707058   60146 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 11:51:53.707411   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:51:53.707451   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:51:53.722927   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45441
	I0610 11:51:53.723433   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:51:53.723927   60146 main.go:141] libmachine: Using API Version  1
	I0610 11:51:53.723953   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:51:53.724482   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:51:53.724651   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:51:53.763209   60146 out.go:177] * Using the kvm2 driver based on existing profile
	I0610 11:51:53.764436   60146 start.go:297] selected driver: kvm2
	I0610 11:51:53.764446   60146 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-281114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-281114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.222 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:51:53.764537   60146 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 11:51:53.765172   60146 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 11:51:53.765257   60146 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19046-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0610 11:51:53.782641   60146 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0610 11:51:53.783044   60146 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 11:51:53.783099   60146 cni.go:84] Creating CNI manager for ""
	I0610 11:51:53.783109   60146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:51:53.783152   60146 start.go:340] cluster config:
	{Name:default-k8s-diff-port-281114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-281114 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.222 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:51:53.783254   60146 iso.go:125] acquiring lock: {Name:mk85871dbcb370cef71a4aa2ff7ef43503908c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 11:51:53.786018   60146 out.go:177] * Starting "default-k8s-diff-port-281114" primary control-plane node in "default-k8s-diff-port-281114" cluster
	I0610 11:51:53.787303   60146 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 11:51:53.787344   60146 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0610 11:51:53.787357   60146 cache.go:56] Caching tarball of preloaded images
	I0610 11:51:53.787439   60146 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 11:51:53.787455   60146 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0610 11:51:53.787569   60146 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/config.json ...
	I0610 11:51:53.787799   60146 start.go:360] acquireMachinesLock for default-k8s-diff-port-281114: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 11:51:53.787855   60146 start.go:364] duration metric: took 30.27µs to acquireMachinesLock for "default-k8s-diff-port-281114"
	I0610 11:51:53.787875   60146 start.go:96] Skipping create...Using existing machine configuration
	I0610 11:51:53.787881   60146 fix.go:54] fixHost starting: 
	I0610 11:51:53.788131   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:51:53.788165   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:51:53.805744   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35417
	I0610 11:51:53.806279   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:51:53.806909   60146 main.go:141] libmachine: Using API Version  1
	I0610 11:51:53.806936   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:51:53.807346   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:51:53.807532   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:51:53.807718   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 11:51:53.809469   60146 fix.go:112] recreateIfNeeded on default-k8s-diff-port-281114: state=Running err=<nil>
	W0610 11:51:53.809507   60146 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 11:51:53.811518   60146 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-281114" VM ...
	I0610 11:51:50.691535   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:52.691588   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:54.692007   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:54.248038   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:54.261302   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:54.261375   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:54.293194   57945 cri.go:89] found id: ""
	I0610 11:51:54.293228   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.293240   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:54.293247   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:54.293307   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:54.326656   57945 cri.go:89] found id: ""
	I0610 11:51:54.326687   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.326699   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:54.326707   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:54.326764   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:54.359330   57945 cri.go:89] found id: ""
	I0610 11:51:54.359365   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.359378   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:54.359386   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:54.359450   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:54.391520   57945 cri.go:89] found id: ""
	I0610 11:51:54.391549   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.391558   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:54.391565   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:54.391642   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:54.426803   57945 cri.go:89] found id: ""
	I0610 11:51:54.426840   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.426850   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:54.426860   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:54.426936   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:54.462618   57945 cri.go:89] found id: ""
	I0610 11:51:54.462645   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.462653   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:54.462659   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:54.462728   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:54.494599   57945 cri.go:89] found id: ""
	I0610 11:51:54.494631   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.494642   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:54.494650   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:54.494701   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:54.528236   57945 cri.go:89] found id: ""
	I0610 11:51:54.528265   57945 logs.go:276] 0 containers: []
	W0610 11:51:54.528280   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:54.528290   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:54.528305   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:54.579562   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:54.579604   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:54.592871   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:54.592899   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:54.661928   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:54.661950   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:54.661984   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:54.741578   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:54.741611   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:53.939312   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:55.940181   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:53.812752   60146 machine.go:94] provisionDockerMachine start ...
	I0610 11:51:53.812779   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:51:53.813001   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:51:53.815580   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:51:53.815981   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:47:50 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:51:53.816013   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:51:53.816111   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:51:53.816288   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:51:53.816435   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:51:53.816577   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:51:53.816743   60146 main.go:141] libmachine: Using SSH client type: native
	I0610 11:51:53.817141   60146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.222 22 <nil> <nil>}
	I0610 11:51:53.817157   60146 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 11:51:56.705435   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
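At this point the provisioner cannot even reach the VM's SSH port ("no route to host"), so the restart stalls before any Kubernetes work happens. Below is a minimal sketch of the same kind of TCP reachability probe, useful for checking the guest network in isolation; the address is taken from the log line and the 5s timeout is an assumption, and this is not minikube's actual provisioning code.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the SSH port the provisioner was dialing; while the guest network is
	// unreachable this fails the same way as the log line above.
	addr := "192.168.50.222:22" // address from the log; adjust for your VM
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		fmt.Printf("dial %s failed: %v\n", addr, err)
		return
	}
	defer conn.Close()
	fmt.Printf("dial %s succeeded\n", addr)
}
```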
	I0610 11:51:56.692515   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:59.192511   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:57.283397   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:51:57.296631   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:51:57.296704   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:51:57.328185   57945 cri.go:89] found id: ""
	I0610 11:51:57.328217   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.328228   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:51:57.328237   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:51:57.328302   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:51:57.360137   57945 cri.go:89] found id: ""
	I0610 11:51:57.360163   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.360173   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:51:57.360188   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:51:57.360244   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:51:57.395638   57945 cri.go:89] found id: ""
	I0610 11:51:57.395680   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.395691   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:51:57.395700   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:51:57.395765   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:51:57.429024   57945 cri.go:89] found id: ""
	I0610 11:51:57.429051   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.429062   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:51:57.429070   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:51:57.429132   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:51:57.461726   57945 cri.go:89] found id: ""
	I0610 11:51:57.461757   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.461767   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:51:57.461773   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:51:57.461838   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:51:57.495055   57945 cri.go:89] found id: ""
	I0610 11:51:57.495078   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.495086   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:51:57.495092   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:51:57.495138   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:51:57.526495   57945 cri.go:89] found id: ""
	I0610 11:51:57.526521   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.526530   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:51:57.526536   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:51:57.526598   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:51:57.559160   57945 cri.go:89] found id: ""
	I0610 11:51:57.559181   57945 logs.go:276] 0 containers: []
	W0610 11:51:57.559189   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:51:57.559197   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:51:57.559212   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:51:57.593801   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:51:57.593827   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:51:57.641074   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:51:57.641106   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:51:57.654097   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:51:57.654124   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:51:57.726137   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:51:57.726160   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:51:57.726176   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:00.302303   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:00.314500   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:00.314560   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:00.345865   57945 cri.go:89] found id: ""
	I0610 11:52:00.345889   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.345897   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:00.345902   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:00.345946   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:00.377383   57945 cri.go:89] found id: ""
	I0610 11:52:00.377405   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.377412   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:00.377417   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:00.377482   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:00.408667   57945 cri.go:89] found id: ""
	I0610 11:52:00.408694   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.408701   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:00.408706   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:00.408755   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:00.444349   57945 cri.go:89] found id: ""
	I0610 11:52:00.444379   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.444390   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:00.444397   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:00.444455   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:00.477886   57945 cri.go:89] found id: ""
	I0610 11:52:00.477910   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.477918   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:00.477924   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:00.477982   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:00.508996   57945 cri.go:89] found id: ""
	I0610 11:52:00.509023   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.509030   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:00.509036   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:00.509097   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:00.541548   57945 cri.go:89] found id: ""
	I0610 11:52:00.541572   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.541580   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:00.541585   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:00.541642   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:00.574507   57945 cri.go:89] found id: ""
	I0610 11:52:00.574534   57945 logs.go:276] 0 containers: []
	W0610 11:52:00.574541   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:00.574550   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:00.574565   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:00.610838   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:00.610862   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:00.661155   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:00.661197   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:00.674122   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:00.674154   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:00.745943   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:00.745976   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:00.745993   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:51:58.439245   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:00.441145   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:51:59.777253   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:01.691833   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:04.193279   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
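
Interleaved with the collector output, two other test processes (PIDs 57572 and 56769 in the log) keep polling their metrics-server pods and seeing the Ready condition stay False. The following is a minimal client-go sketch of that kind of readiness check, not minikube's pod_ready.go; the kubeconfig path and poll interval are assumptions, while the pod name is taken from the log.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path; minikube manages per-profile kubeconfigs itself.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "metrics-server-569cc877fc-hg4j8", metav1.GetOptions{})
		if err != nil {
			fmt.Println("get pod:", err)
		} else {
			ready := podReady(pod)
			fmt.Printf("pod %q has status \"Ready\":%v\n", pod.Name, ready)
			if ready {
				return
			}
		}
		time.Sleep(2 * time.Second) // poll interval is an assumption
	}
}
```
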
	I0610 11:52:03.325365   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:03.337955   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:03.338042   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:03.370767   57945 cri.go:89] found id: ""
	I0610 11:52:03.370798   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.370810   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:03.370818   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:03.370903   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:03.402587   57945 cri.go:89] found id: ""
	I0610 11:52:03.402616   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.402623   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:03.402628   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:03.402684   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:03.436751   57945 cri.go:89] found id: ""
	I0610 11:52:03.436778   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.436788   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:03.436795   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:03.436854   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:03.467745   57945 cri.go:89] found id: ""
	I0610 11:52:03.467778   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.467788   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:03.467798   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:03.467865   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:03.499321   57945 cri.go:89] found id: ""
	I0610 11:52:03.499347   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.499355   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:03.499361   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:03.499419   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:03.534209   57945 cri.go:89] found id: ""
	I0610 11:52:03.534242   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.534253   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:03.534261   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:03.534318   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:03.567837   57945 cri.go:89] found id: ""
	I0610 11:52:03.567871   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.567882   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:03.567889   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:03.567954   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:03.604223   57945 cri.go:89] found id: ""
	I0610 11:52:03.604249   57945 logs.go:276] 0 containers: []
	W0610 11:52:03.604258   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:03.604266   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:03.604280   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:03.659716   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:03.659751   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:03.673389   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:03.673425   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:03.746076   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:03.746104   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:03.746118   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:03.825803   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:03.825837   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:06.362151   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:06.375320   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:06.375394   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:06.409805   57945 cri.go:89] found id: ""
	I0610 11:52:06.409840   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.409851   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:06.409859   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:06.409914   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:06.447126   57945 cri.go:89] found id: ""
	I0610 11:52:06.447157   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.447167   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:06.447174   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:06.447237   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:06.479443   57945 cri.go:89] found id: ""
	I0610 11:52:06.479472   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.479483   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:06.479489   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:06.479546   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:06.511107   57945 cri.go:89] found id: ""
	I0610 11:52:06.511137   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.511148   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:06.511163   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:06.511223   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:06.542727   57945 cri.go:89] found id: ""
	I0610 11:52:06.542753   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.542761   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:06.542767   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:06.542812   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:06.582141   57945 cri.go:89] found id: ""
	I0610 11:52:06.582166   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.582174   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:06.582180   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:06.582239   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:06.615203   57945 cri.go:89] found id: ""
	I0610 11:52:06.615230   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.615240   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:06.615248   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:06.615314   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:06.650286   57945 cri.go:89] found id: ""
	I0610 11:52:06.650310   57945 logs.go:276] 0 containers: []
	W0610 11:52:06.650317   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:06.650326   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:06.650338   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:06.721601   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:06.721631   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:06.721646   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:06.794645   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:06.794679   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:06.830598   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:06.830628   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:06.880740   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:06.880786   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:02.939105   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:04.939366   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:07.439715   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:05.861224   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:06.691130   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:09.191608   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
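
The third interleaved process (PID 60146) is libmachine repeatedly failing to open a TCP connection to its VM's SSH port; "connect: no route to host" means the guest IP 192.168.50.222 is unreachable at the network level, not merely that sshd has not started yet. A small sketch of such a dial-and-retry loop, with an illustrative retry policy:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH retries a plain TCP dial to the address until it succeeds or the
// attempts are exhausted. "connect: no route to host" surfaces here as the
// dial error, exactly as in the libmachine lines above.
func waitForSSH(addr string, attempts int, wait time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		lastErr = err
		fmt.Printf("Error dialing TCP: %v\n", err)
		time.Sleep(wait)
	}
	return fmt.Errorf("ssh port never became reachable: %w", lastErr)
}

func main() {
	if err := waitForSSH("192.168.50.222:22", 10, 3*time.Second); err != nil {
		fmt.Println(err)
	}
}
```
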
	I0610 11:52:09.394202   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:09.409822   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:09.409898   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:09.451573   57945 cri.go:89] found id: ""
	I0610 11:52:09.451597   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.451605   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:09.451611   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:09.451663   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:09.491039   57945 cri.go:89] found id: ""
	I0610 11:52:09.491069   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.491080   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:09.491087   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:09.491147   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:09.522023   57945 cri.go:89] found id: ""
	I0610 11:52:09.522050   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.522058   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:09.522063   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:09.522108   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:09.554014   57945 cri.go:89] found id: ""
	I0610 11:52:09.554040   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.554048   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:09.554057   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:09.554127   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:09.586285   57945 cri.go:89] found id: ""
	I0610 11:52:09.586318   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.586328   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:09.586336   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:09.586396   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:09.618362   57945 cri.go:89] found id: ""
	I0610 11:52:09.618391   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.618401   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:09.618408   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:09.618465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:09.651067   57945 cri.go:89] found id: ""
	I0610 11:52:09.651097   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.651108   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:09.651116   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:09.651174   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:09.682764   57945 cri.go:89] found id: ""
	I0610 11:52:09.682792   57945 logs.go:276] 0 containers: []
	W0610 11:52:09.682799   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:09.682807   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:09.682819   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:09.755071   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:09.755096   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:09.755109   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:09.833635   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:09.833672   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:09.869744   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:09.869777   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:09.924045   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:09.924079   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:09.440296   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:11.939025   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:08.929184   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:11.691213   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:13.693439   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:12.438029   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:12.452003   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:12.452070   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:12.485680   57945 cri.go:89] found id: ""
	I0610 11:52:12.485711   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.485719   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:12.485725   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:12.485773   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:12.519200   57945 cri.go:89] found id: ""
	I0610 11:52:12.519227   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.519238   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:12.519245   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:12.519317   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:12.553154   57945 cri.go:89] found id: ""
	I0610 11:52:12.553179   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.553185   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:12.553191   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:12.553237   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:12.584499   57945 cri.go:89] found id: ""
	I0610 11:52:12.584543   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.584555   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:12.584564   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:12.584619   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:12.619051   57945 cri.go:89] found id: ""
	I0610 11:52:12.619079   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.619094   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:12.619102   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:12.619165   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:12.653652   57945 cri.go:89] found id: ""
	I0610 11:52:12.653690   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.653702   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:12.653710   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:12.653773   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:12.685887   57945 cri.go:89] found id: ""
	I0610 11:52:12.685919   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.685930   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:12.685938   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:12.685997   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:12.719534   57945 cri.go:89] found id: ""
	I0610 11:52:12.719567   57945 logs.go:276] 0 containers: []
	W0610 11:52:12.719578   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:12.719591   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:12.719603   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:12.770689   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:12.770725   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:12.783574   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:12.783604   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:12.855492   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:12.855518   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:12.855529   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:12.928993   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:12.929037   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
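
Every "describe nodes" attempt in these cycles fails identically: the node-local kubectl is pointed at /var/lib/minikube/kubeconfig, which targets localhost:8443, and with no kube-apiserver container running the connection is refused and the command exits with status 1. The sketch below runs that same command and captures both streams, roughly the way the collector records the stdout/stderr blocks above; the wrapper itself is illustrative, and the binary and kubeconfig paths are copied from the log.

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// describeNodes runs the node-local kubectl against the node's kubeconfig and
// returns stdout, stderr, and the exec error, mirroring the
// "failed describe nodes ... Process exited with status 1" entries above.
func describeNodes() (string, string, error) {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.20.0/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	err := cmd.Run()
	return stdout.String(), stderr.String(), err
}

func main() {
	out, errOut, err := describeNodes()
	fmt.Println("stdout:", out)
	fmt.Println("stderr:", errOut) // "The connection to the server localhost:8443 was refused ..."
	if err != nil {
		fmt.Println("exit:", err)
	}
}
```
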
	I0610 11:52:15.487670   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:15.501367   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:15.501437   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:15.534205   57945 cri.go:89] found id: ""
	I0610 11:52:15.534248   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.534256   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:15.534262   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:15.534315   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:15.570972   57945 cri.go:89] found id: ""
	I0610 11:52:15.571001   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.571008   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:15.571013   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:15.571073   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:15.604233   57945 cri.go:89] found id: ""
	I0610 11:52:15.604258   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.604267   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:15.604273   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:15.604328   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:15.637119   57945 cri.go:89] found id: ""
	I0610 11:52:15.637150   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.637159   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:15.637167   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:15.637226   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:15.670548   57945 cri.go:89] found id: ""
	I0610 11:52:15.670572   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.670580   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:15.670586   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:15.670644   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:15.706374   57945 cri.go:89] found id: ""
	I0610 11:52:15.706398   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.706406   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:15.706412   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:15.706457   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:15.742828   57945 cri.go:89] found id: ""
	I0610 11:52:15.742852   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.742859   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:15.742865   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:15.742910   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:15.773783   57945 cri.go:89] found id: ""
	I0610 11:52:15.773811   57945 logs.go:276] 0 containers: []
	W0610 11:52:15.773818   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:15.773825   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:15.773835   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:15.828725   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:15.828764   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:15.842653   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:15.842682   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:15.919771   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:15.919794   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:15.919809   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:15.994439   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:15.994478   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
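
The "container status" step uses a shell fallback: resolve crictl via which if possible, otherwise try it by bare name, and only if that whole command fails fall back to docker ps -a. A one-function sketch of the same fallback chain (the function name is made up for illustration):

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the fallback seen in the log: prefer crictl
// (resolved with `which` when available), otherwise list containers via docker.
func containerStatus() (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	fmt.Println(out)
	if err != nil {
		fmt.Println("container status failed:", err)
	}
}
```
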
	I0610 11:52:13.943213   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:16.439647   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:15.009211   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:18.081244   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:16.191615   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:18.191760   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:18.532040   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:18.544800   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:18.544893   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:18.579148   57945 cri.go:89] found id: ""
	I0610 11:52:18.579172   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.579180   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:18.579186   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:18.579236   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:18.613005   57945 cri.go:89] found id: ""
	I0610 11:52:18.613028   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.613035   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:18.613042   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:18.613094   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:18.648843   57945 cri.go:89] found id: ""
	I0610 11:52:18.648870   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.648878   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:18.648883   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:18.648939   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:18.678943   57945 cri.go:89] found id: ""
	I0610 11:52:18.678974   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.679014   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:18.679022   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:18.679082   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:18.728485   57945 cri.go:89] found id: ""
	I0610 11:52:18.728516   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.728527   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:18.728535   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:18.728605   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:18.764320   57945 cri.go:89] found id: ""
	I0610 11:52:18.764352   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.764363   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:18.764370   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:18.764431   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:18.797326   57945 cri.go:89] found id: ""
	I0610 11:52:18.797358   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.797369   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:18.797377   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:18.797440   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:18.832517   57945 cri.go:89] found id: ""
	I0610 11:52:18.832552   57945 logs.go:276] 0 containers: []
	W0610 11:52:18.832563   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:18.832574   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:18.832588   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:18.845158   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:18.845192   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:18.915928   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:18.915959   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:18.915974   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:18.990583   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:18.990625   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:19.029044   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:19.029069   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
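
Besides the container scans, each cycle gathers unit logs and kernel messages: the last 400 journal lines for the kubelet and crio units, and dmesg filtered to warnings and above. A local sketch of those two commands follows; in the test they are executed on the node through minikube's ssh_runner, and the command strings are copied from the log.

```go
package main

import (
	"fmt"
	"os/exec"
)

// tailUnit returns the last n journal lines for a systemd unit, the same
// command the collector runs for "kubelet" and "crio" above.
func tailUnit(unit string, n int) (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("sudo journalctl -u %s -n %d", unit, n)).CombinedOutput()
	return string(out), err
}

// dmesgWarnings mirrors the dmesg invocation in the log: human-readable
// timestamps, no colour, warning level and above, last 400 lines.
func dmesgWarnings() (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400").CombinedOutput()
	return string(out), err
}

func main() {
	for _, u := range []string{"kubelet", "crio"} {
		logs, err := tailUnit(u, 400)
		fmt.Printf("== %s (%d bytes, err=%v) ==\n", u, len(logs), err)
	}
	warn, err := dmesgWarnings()
	fmt.Printf("== dmesg (%d bytes, err=%v) ==\n", len(warn), err)
}
```
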
	I0610 11:52:21.582973   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:21.596373   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:21.596453   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:21.633497   57945 cri.go:89] found id: ""
	I0610 11:52:21.633528   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.633538   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:21.633546   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:21.633631   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:21.663999   57945 cri.go:89] found id: ""
	I0610 11:52:21.664055   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.664069   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:21.664078   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:21.664138   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:21.698105   57945 cri.go:89] found id: ""
	I0610 11:52:21.698136   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.698147   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:21.698155   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:21.698213   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:21.730036   57945 cri.go:89] found id: ""
	I0610 11:52:21.730061   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.730068   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:21.730074   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:21.730119   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:21.764484   57945 cri.go:89] found id: ""
	I0610 11:52:21.764507   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.764515   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:21.764520   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:21.764575   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:21.797366   57945 cri.go:89] found id: ""
	I0610 11:52:21.797397   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.797408   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:21.797415   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:21.797478   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:21.832991   57945 cri.go:89] found id: ""
	I0610 11:52:21.833023   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.833030   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:21.833035   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:21.833081   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:21.868859   57945 cri.go:89] found id: ""
	I0610 11:52:21.868890   57945 logs.go:276] 0 containers: []
	W0610 11:52:21.868899   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:21.868924   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:21.868937   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:21.918976   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:21.919013   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:21.934602   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:21.934629   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:22.002888   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:22.002909   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:22.002920   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:22.082894   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:22.082941   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:18.439853   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:20.942040   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:20.692398   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:23.191532   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:24.620683   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:24.634200   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:24.634280   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:24.667181   57945 cri.go:89] found id: ""
	I0610 11:52:24.667209   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.667217   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:24.667222   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:24.667277   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:24.702114   57945 cri.go:89] found id: ""
	I0610 11:52:24.702142   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.702151   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:24.702158   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:24.702220   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:24.734464   57945 cri.go:89] found id: ""
	I0610 11:52:24.734488   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.734497   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:24.734502   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:24.734565   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:24.767074   57945 cri.go:89] found id: ""
	I0610 11:52:24.767124   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.767132   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:24.767138   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:24.767210   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:24.800328   57945 cri.go:89] found id: ""
	I0610 11:52:24.800358   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.800369   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:24.800376   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:24.800442   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:24.837785   57945 cri.go:89] found id: ""
	I0610 11:52:24.837814   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.837822   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:24.837828   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:24.837878   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:24.874886   57945 cri.go:89] found id: ""
	I0610 11:52:24.874910   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.874917   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:24.874923   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:24.874968   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:24.912191   57945 cri.go:89] found id: ""
	I0610 11:52:24.912217   57945 logs.go:276] 0 containers: []
	W0610 11:52:24.912235   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:24.912247   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:24.912265   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:24.968229   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:24.968262   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:24.981018   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:24.981048   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:25.049879   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:25.049907   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:25.049922   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:25.135103   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:25.135156   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
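
Each cycle opens with "sudo pgrep -xnf kube-apiserver.*minikube.*", a quick check for a running apiserver process before falling back to the per-component container scan. A sketch of that check is below; note that pgrep exits 1 when nothing matches, which is treated here as "no process" rather than an error.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiserverPIDs runs the same pgrep pattern seen at the top of each cycle and
// returns any matching PIDs; exit code 1 from pgrep means no match.
func apiserverPIDs() ([]string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
			return nil, nil // no running kube-apiserver process
		}
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	pids, err := apiserverPIDs()
	if err != nil {
		fmt.Println("pgrep failed:", err)
		return
	}
	fmt.Printf("kube-apiserver PIDs: %v\n", pids)
}
```
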
	I0610 11:52:23.440293   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:25.939540   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:27.201186   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:25.691136   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:27.691669   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:27.687667   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:27.700418   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:27.700486   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:27.733712   57945 cri.go:89] found id: ""
	I0610 11:52:27.733740   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.733749   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:27.733754   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:27.733839   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:27.774063   57945 cri.go:89] found id: ""
	I0610 11:52:27.774089   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.774100   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:27.774108   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:27.774169   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:27.813906   57945 cri.go:89] found id: ""
	I0610 11:52:27.813945   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.813956   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:27.813963   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:27.814031   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:27.845877   57945 cri.go:89] found id: ""
	I0610 11:52:27.845901   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.845909   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:27.845915   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:27.845961   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:27.880094   57945 cri.go:89] found id: ""
	I0610 11:52:27.880139   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.880148   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:27.880153   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:27.880206   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:27.914308   57945 cri.go:89] found id: ""
	I0610 11:52:27.914332   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.914342   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:27.914355   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:27.914420   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:27.949386   57945 cri.go:89] found id: ""
	I0610 11:52:27.949412   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.949423   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:27.949430   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:27.949490   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:27.983901   57945 cri.go:89] found id: ""
	I0610 11:52:27.983927   57945 logs.go:276] 0 containers: []
	W0610 11:52:27.983938   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:27.983948   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:27.983963   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:28.032820   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:28.032853   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:28.046306   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:28.046332   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:28.120614   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:28.120642   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:28.120657   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:28.202182   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:28.202217   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:30.741274   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:30.754276   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:30.754358   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:30.789142   57945 cri.go:89] found id: ""
	I0610 11:52:30.789174   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.789185   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:30.789193   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:30.789255   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:30.822319   57945 cri.go:89] found id: ""
	I0610 11:52:30.822350   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.822362   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:30.822369   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:30.822428   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:30.853166   57945 cri.go:89] found id: ""
	I0610 11:52:30.853192   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.853199   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:30.853204   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:30.853271   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:30.892290   57945 cri.go:89] found id: ""
	I0610 11:52:30.892320   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.892331   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:30.892339   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:30.892401   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:30.938603   57945 cri.go:89] found id: ""
	I0610 11:52:30.938629   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.938639   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:30.938646   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:30.938703   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:30.994532   57945 cri.go:89] found id: ""
	I0610 11:52:30.994567   57945 logs.go:276] 0 containers: []
	W0610 11:52:30.994583   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:30.994589   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:30.994649   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:31.041818   57945 cri.go:89] found id: ""
	I0610 11:52:31.041847   57945 logs.go:276] 0 containers: []
	W0610 11:52:31.041859   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:31.041867   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:31.041923   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:31.079897   57945 cri.go:89] found id: ""
	I0610 11:52:31.079927   57945 logs.go:276] 0 containers: []
	W0610 11:52:31.079938   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:31.079951   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:31.079967   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:31.092291   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:31.092321   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:31.163921   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:31.163943   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:31.163955   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:31.242247   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:31.242287   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:31.281257   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:31.281286   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:27.940743   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:30.440529   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:30.273256   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:30.192386   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:32.192470   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:34.691408   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:33.837783   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:33.851085   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:33.851164   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:33.885285   57945 cri.go:89] found id: ""
	I0610 11:52:33.885314   57945 logs.go:276] 0 containers: []
	W0610 11:52:33.885324   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:33.885332   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:33.885391   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:33.924958   57945 cri.go:89] found id: ""
	I0610 11:52:33.924996   57945 logs.go:276] 0 containers: []
	W0610 11:52:33.925006   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:33.925022   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:33.925083   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:33.958563   57945 cri.go:89] found id: ""
	I0610 11:52:33.958589   57945 logs.go:276] 0 containers: []
	W0610 11:52:33.958598   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:33.958606   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:33.958665   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:33.991575   57945 cri.go:89] found id: ""
	I0610 11:52:33.991606   57945 logs.go:276] 0 containers: []
	W0610 11:52:33.991616   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:33.991624   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:33.991693   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:34.029700   57945 cri.go:89] found id: ""
	I0610 11:52:34.029729   57945 logs.go:276] 0 containers: []
	W0610 11:52:34.029740   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:34.029748   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:34.029805   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:34.068148   57945 cri.go:89] found id: ""
	I0610 11:52:34.068183   57945 logs.go:276] 0 containers: []
	W0610 11:52:34.068194   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:34.068201   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:34.068275   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:34.100735   57945 cri.go:89] found id: ""
	I0610 11:52:34.100760   57945 logs.go:276] 0 containers: []
	W0610 11:52:34.100767   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:34.100772   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:34.100817   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:34.132898   57945 cri.go:89] found id: ""
	I0610 11:52:34.132927   57945 logs.go:276] 0 containers: []
	W0610 11:52:34.132937   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:34.132958   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:34.132972   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:34.184690   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:34.184723   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:34.199604   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:34.199641   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:34.270744   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:34.270763   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:34.270775   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:34.352291   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:34.352334   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:36.894188   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:36.914098   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:36.914158   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:36.957378   57945 cri.go:89] found id: ""
	I0610 11:52:36.957408   57945 logs.go:276] 0 containers: []
	W0610 11:52:36.957419   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:36.957427   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:36.957498   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:37.003576   57945 cri.go:89] found id: ""
	I0610 11:52:37.003602   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.003611   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:37.003618   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:37.003677   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:37.040221   57945 cri.go:89] found id: ""
	I0610 11:52:37.040245   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.040253   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:37.040259   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:37.040307   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:37.078151   57945 cri.go:89] found id: ""
	I0610 11:52:37.078185   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.078195   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:37.078202   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:37.078261   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:37.117446   57945 cri.go:89] found id: ""
	I0610 11:52:37.117468   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.117476   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:37.117482   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:37.117548   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:37.155320   57945 cri.go:89] found id: ""
	I0610 11:52:37.155344   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.155356   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:37.155364   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:37.155414   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:37.192194   57945 cri.go:89] found id: ""
	I0610 11:52:37.192221   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.192230   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:37.192238   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:37.192303   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:37.225567   57945 cri.go:89] found id: ""
	I0610 11:52:37.225594   57945 logs.go:276] 0 containers: []
	W0610 11:52:37.225605   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:37.225616   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:37.225632   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:37.240139   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:37.240164   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 11:52:32.940571   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:34.940672   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:37.440898   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:36.353199   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:36.697419   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:39.190952   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	W0610 11:52:37.307754   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:37.307784   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:37.307801   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:37.385929   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:37.385964   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:37.424991   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:37.425029   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:39.974839   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:39.988788   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:39.988858   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:40.025922   57945 cri.go:89] found id: ""
	I0610 11:52:40.025947   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.025954   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:40.025967   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:40.026026   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:40.062043   57945 cri.go:89] found id: ""
	I0610 11:52:40.062076   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.062085   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:40.062094   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:40.062158   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:40.095441   57945 cri.go:89] found id: ""
	I0610 11:52:40.095465   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.095472   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:40.095478   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:40.095529   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:40.127633   57945 cri.go:89] found id: ""
	I0610 11:52:40.127662   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.127672   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:40.127680   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:40.127740   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:40.161232   57945 cri.go:89] found id: ""
	I0610 11:52:40.161257   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.161267   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:40.161274   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:40.161334   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:40.194491   57945 cri.go:89] found id: ""
	I0610 11:52:40.194521   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.194529   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:40.194535   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:40.194583   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:40.226376   57945 cri.go:89] found id: ""
	I0610 11:52:40.226404   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.226411   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:40.226416   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:40.226465   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:40.257938   57945 cri.go:89] found id: ""
	I0610 11:52:40.257968   57945 logs.go:276] 0 containers: []
	W0610 11:52:40.257978   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:40.257988   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:40.258004   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:40.327247   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:40.327276   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:40.327291   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:40.404231   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:40.404263   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:40.441554   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:40.441585   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:40.491952   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:40.491987   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:39.939538   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:41.939639   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:39.425159   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:41.191808   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:43.695646   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:43.006217   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:43.019113   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:43.019187   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:43.053010   57945 cri.go:89] found id: ""
	I0610 11:52:43.053035   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.053045   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:43.053051   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:43.053115   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:43.086118   57945 cri.go:89] found id: ""
	I0610 11:52:43.086145   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.086156   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:43.086171   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:43.086235   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:43.117892   57945 cri.go:89] found id: ""
	I0610 11:52:43.117919   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.117929   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:43.117937   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:43.118011   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:43.149751   57945 cri.go:89] found id: ""
	I0610 11:52:43.149777   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.149787   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:43.149795   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:43.149855   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:43.184215   57945 cri.go:89] found id: ""
	I0610 11:52:43.184250   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.184261   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:43.184268   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:43.184332   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:43.219758   57945 cri.go:89] found id: ""
	I0610 11:52:43.219787   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.219797   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:43.219805   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:43.219868   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:43.250698   57945 cri.go:89] found id: ""
	I0610 11:52:43.250728   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.250738   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:43.250746   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:43.250803   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:43.286526   57945 cri.go:89] found id: ""
	I0610 11:52:43.286556   57945 logs.go:276] 0 containers: []
	W0610 11:52:43.286566   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:43.286576   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:43.286589   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:43.362219   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:43.362255   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:43.398332   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:43.398366   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:43.449468   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:43.449502   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:43.462346   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:43.462381   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:43.539578   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:46.039720   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:46.052749   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:52:46.052821   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:52:46.093110   57945 cri.go:89] found id: ""
	I0610 11:52:46.093139   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.093147   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:52:46.093152   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:52:46.093219   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:52:46.130885   57945 cri.go:89] found id: ""
	I0610 11:52:46.130916   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.130924   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:52:46.130930   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:52:46.130977   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:52:46.167471   57945 cri.go:89] found id: ""
	I0610 11:52:46.167507   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.167524   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:52:46.167531   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:52:46.167593   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:52:46.204776   57945 cri.go:89] found id: ""
	I0610 11:52:46.204799   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.204807   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:52:46.204812   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:52:46.204860   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:52:46.244826   57945 cri.go:89] found id: ""
	I0610 11:52:46.244859   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.244869   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:52:46.244876   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:52:46.244942   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:52:46.281757   57945 cri.go:89] found id: ""
	I0610 11:52:46.281783   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.281791   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:52:46.281797   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:52:46.281844   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:52:46.319517   57945 cri.go:89] found id: ""
	I0610 11:52:46.319546   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.319558   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:52:46.319566   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:52:46.319636   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:52:46.355806   57945 cri.go:89] found id: ""
	I0610 11:52:46.355835   57945 logs.go:276] 0 containers: []
	W0610 11:52:46.355846   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:52:46.355858   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:52:46.355872   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:52:46.433087   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:52:46.433131   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:52:46.468792   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:52:46.468829   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:52:46.517931   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:52:46.517969   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:52:46.530892   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:52:46.530935   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:52:46.592585   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:52:43.940733   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:46.440354   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:45.505281   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:48.577214   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:46.191520   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:48.691214   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:49.093662   57945 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:52:49.106539   57945 kubeadm.go:591] duration metric: took 4m4.396325615s to restartPrimaryControlPlane
	W0610 11:52:49.106625   57945 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0610 11:52:49.106658   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0610 11:52:48.441202   57572 pod_ready.go:102] pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:50.433923   57572 pod_ready.go:81] duration metric: took 4m0.000312516s for pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace to be "Ready" ...
	E0610 11:52:50.433960   57572 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-hg4j8" in "kube-system" namespace to be "Ready" (will not retry!)
	I0610 11:52:50.433982   57572 pod_ready.go:38] duration metric: took 4m5.113212783s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 11:52:50.434008   57572 kubeadm.go:591] duration metric: took 4m16.406085019s to restartPrimaryControlPlane
	W0610 11:52:50.434091   57572 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0610 11:52:50.434128   57572 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0610 11:52:53.503059   57945 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.396374472s)
	I0610 11:52:53.503148   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:52:53.518235   57945 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 11:52:53.529298   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:52:53.539273   57945 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:52:53.539297   57945 kubeadm.go:156] found existing configuration files:
	
	I0610 11:52:53.539341   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 11:52:53.548285   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:52:53.548354   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:52:53.557659   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 11:52:53.569253   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:52:53.569330   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:52:53.579689   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 11:52:53.589800   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:52:53.589865   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:52:53.600324   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 11:52:53.610542   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:52:53.610612   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 11:52:53.620144   57945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 11:52:53.687195   57945 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0610 11:52:53.687302   57945 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 11:52:53.851035   57945 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 11:52:53.851178   57945 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 11:52:53.851305   57945 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 11:52:54.037503   57945 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 11:52:54.039523   57945 out.go:204]   - Generating certificates and keys ...
	I0610 11:52:54.039621   57945 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 11:52:54.039718   57945 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 11:52:54.039850   57945 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 11:52:54.039959   57945 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 11:52:54.040055   57945 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 11:52:54.040135   57945 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 11:52:54.040233   57945 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 11:52:54.040506   57945 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 11:52:54.040892   57945 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 11:52:54.041344   57945 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 11:52:54.041411   57945 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 11:52:54.041507   57945 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 11:52:54.151486   57945 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 11:52:54.389555   57945 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 11:52:54.507653   57945 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 11:52:54.690886   57945 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 11:52:54.708542   57945 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 11:52:54.712251   57945 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 11:52:54.712504   57945 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 11:52:54.872755   57945 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 11:52:50.691517   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:53.191418   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:54.874801   57945 out.go:204]   - Booting up control plane ...
	I0610 11:52:54.874978   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 11:52:54.883224   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 11:52:54.885032   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 11:52:54.886182   57945 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 11:52:54.891030   57945 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 11:52:54.661214   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:57.729160   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:52:55.691987   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:52:58.192548   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:00.692060   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:03.192673   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:03.809217   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:06.885176   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:05.692004   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:07.692545   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:12.961318   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:10.191064   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:12.192258   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:14.691564   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:16.033278   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:16.691670   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:18.691801   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:21.778313   57572 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.344150357s)
	I0610 11:53:21.778398   57572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:53:21.793960   57572 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 11:53:21.803952   57572 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:53:21.813685   57572 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:53:21.813709   57572 kubeadm.go:156] found existing configuration files:
	
	I0610 11:53:21.813758   57572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 11:53:21.823957   57572 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:53:21.824027   57572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:53:21.833125   57572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 11:53:21.841834   57572 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:53:21.841893   57572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:53:21.850999   57572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 11:53:21.859858   57572 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:53:21.859920   57572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:53:21.869076   57572 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 11:53:21.877079   57572 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:53:21.877141   57572 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 11:53:21.887614   57572 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 11:53:21.941932   57572 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0610 11:53:21.941987   57572 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 11:53:22.084118   57572 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 11:53:22.084219   57572 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 11:53:22.084310   57572 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 11:53:22.287685   57572 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 11:53:22.289568   57572 out.go:204]   - Generating certificates and keys ...
	I0610 11:53:22.289674   57572 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 11:53:22.289779   57572 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 11:53:22.289917   57572 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 11:53:22.290032   57572 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 11:53:22.290144   57572 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 11:53:22.290234   57572 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 11:53:22.290339   57572 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 11:53:22.290439   57572 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 11:53:22.290558   57572 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 11:53:22.290674   57572 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 11:53:22.290732   57572 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 11:53:22.290819   57572 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 11:53:22.354674   57572 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 11:53:22.573948   57572 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 11:53:22.805694   57572 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 11:53:22.914740   57572 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 11:53:23.218887   57572 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 11:53:23.221479   57572 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 11:53:23.223937   57572 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 11:53:22.113312   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:20.692241   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:23.192124   56769 pod_ready.go:102] pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace has status "Ready":"False"
	I0610 11:53:23.695912   56769 pod_ready.go:81] duration metric: took 4m0.01073501s for pod "metrics-server-569cc877fc-5zg8j" in "kube-system" namespace to be "Ready" ...
	E0610 11:53:23.695944   56769 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0610 11:53:23.695954   56769 pod_ready.go:38] duration metric: took 4m2.412094982s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 11:53:23.695972   56769 api_server.go:52] waiting for apiserver process to appear ...
	I0610 11:53:23.696001   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:53:23.696058   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:53:23.758822   56769 cri.go:89] found id: "61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:23.758850   56769 cri.go:89] found id: ""
	I0610 11:53:23.758860   56769 logs.go:276] 1 containers: [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29]
	I0610 11:53:23.758921   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.765128   56769 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:53:23.765198   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:53:23.798454   56769 cri.go:89] found id: "0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:23.798483   56769 cri.go:89] found id: ""
	I0610 11:53:23.798494   56769 logs.go:276] 1 containers: [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c]
	I0610 11:53:23.798560   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.802985   56769 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:53:23.803051   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:53:23.855781   56769 cri.go:89] found id: "04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:23.855810   56769 cri.go:89] found id: ""
	I0610 11:53:23.855819   56769 logs.go:276] 1 containers: [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933]
	I0610 11:53:23.855873   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.860285   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:53:23.860363   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:53:23.901849   56769 cri.go:89] found id: "7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:23.901868   56769 cri.go:89] found id: ""
	I0610 11:53:23.901878   56769 logs.go:276] 1 containers: [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9]
	I0610 11:53:23.901935   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.906116   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:53:23.906183   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:53:23.941376   56769 cri.go:89] found id: "3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:23.941396   56769 cri.go:89] found id: ""
	I0610 11:53:23.941405   56769 logs.go:276] 1 containers: [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb]
	I0610 11:53:23.941463   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.947379   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:53:23.947450   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:53:23.984733   56769 cri.go:89] found id: "7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:23.984757   56769 cri.go:89] found id: ""
	I0610 11:53:23.984766   56769 logs.go:276] 1 containers: [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43]
	I0610 11:53:23.984839   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:23.988701   56769 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:53:23.988752   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:53:24.024067   56769 cri.go:89] found id: ""
	I0610 11:53:24.024094   56769 logs.go:276] 0 containers: []
	W0610 11:53:24.024103   56769 logs.go:278] No container was found matching "kindnet"
	I0610 11:53:24.024110   56769 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0610 11:53:24.024170   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0610 11:53:24.058220   56769 cri.go:89] found id: "5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:24.058250   56769 cri.go:89] found id: "8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:24.058255   56769 cri.go:89] found id: ""
	I0610 11:53:24.058263   56769 logs.go:276] 2 containers: [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262]
	I0610 11:53:24.058321   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:24.062072   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:24.065706   56769 logs.go:123] Gathering logs for etcd [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c] ...
	I0610 11:53:24.065723   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:24.104622   56769 logs.go:123] Gathering logs for coredns [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933] ...
	I0610 11:53:24.104652   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:24.142432   56769 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:53:24.142457   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:53:24.670328   56769 logs.go:123] Gathering logs for container status ...
	I0610 11:53:24.670375   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:53:24.726557   56769 logs.go:123] Gathering logs for kube-scheduler [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9] ...
	I0610 11:53:24.726592   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:24.769111   56769 logs.go:123] Gathering logs for kube-proxy [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb] ...
	I0610 11:53:24.769150   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:24.811199   56769 logs.go:123] Gathering logs for kube-controller-manager [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43] ...
	I0610 11:53:24.811246   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:24.876489   56769 logs.go:123] Gathering logs for storage-provisioner [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e] ...
	I0610 11:53:24.876547   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
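For reference, the per-component container lookup and log gathering recorded above can be reproduced by hand on the node. A minimal sketch, assuming crictl is on the PATH and CRI-O is the runtime (the component names and the 400-line tail come from the log itself):

    # hypothetical manual reproduction of the crictl enumeration minikube runs above
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && { echo "no container found matching \"$name\""; continue; }
      for id in $ids; do
        echo "=== $name ($id) ==="
        sudo crictl logs --tail 400 "$id"
      done
    done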
	I0610 11:53:23.225694   57572 out.go:204]   - Booting up control plane ...
	I0610 11:53:23.225803   57572 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 11:53:23.225898   57572 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 11:53:23.226004   57572 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 11:53:23.245138   57572 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 11:53:23.246060   57572 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 11:53:23.246121   57572 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 11:53:23.375562   57572 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0610 11:53:23.375689   57572 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 11:53:23.877472   57572 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.888048ms
	I0610 11:53:23.877560   57572 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0610 11:53:25.185274   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:28.879976   57572 kubeadm.go:309] [api-check] The API server is healthy after 5.002334008s
	I0610 11:53:28.902382   57572 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 11:53:28.924552   57572 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 11:53:28.956686   57572 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 11:53:28.956958   57572 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-298179 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 11:53:28.971883   57572 kubeadm.go:309] [bootstrap-token] Using token: zdzp8m.ttyzgfzbws24vbk8
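The bootstrap token printed above can be inspected or refreshed on the control-plane node with standard kubeadm commands; a sketch (no new token values are introduced here):

    sudo kubeadm token list                          # shows the token above and its TTL
    sudo kubeadm token create --print-join-command   # prints a fresh join command if it expires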
	I0610 11:53:24.916641   56769 logs.go:123] Gathering logs for kubelet ...
	I0610 11:53:24.916824   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:53:24.980737   56769 logs.go:123] Gathering logs for dmesg ...
	I0610 11:53:24.980779   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:53:24.998139   56769 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:53:24.998163   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 11:53:25.113809   56769 logs.go:123] Gathering logs for kube-apiserver [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29] ...
	I0610 11:53:25.113839   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:25.168214   56769 logs.go:123] Gathering logs for storage-provisioner [8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262] ...
	I0610 11:53:25.168254   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:27.708296   56769 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:53:27.730996   56769 api_server.go:72] duration metric: took 4m14.155149231s to wait for apiserver process to appear ...
	I0610 11:53:27.731021   56769 api_server.go:88] waiting for apiserver healthz status ...
	I0610 11:53:27.731057   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:53:27.731116   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:53:27.767385   56769 cri.go:89] found id: "61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:27.767411   56769 cri.go:89] found id: ""
	I0610 11:53:27.767420   56769 logs.go:276] 1 containers: [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29]
	I0610 11:53:27.767465   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:27.771646   56769 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:53:27.771723   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:53:27.806969   56769 cri.go:89] found id: "0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:27.806996   56769 cri.go:89] found id: ""
	I0610 11:53:27.807005   56769 logs.go:276] 1 containers: [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c]
	I0610 11:53:27.807060   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:27.811580   56769 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:53:27.811655   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:53:27.850853   56769 cri.go:89] found id: "04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:27.850879   56769 cri.go:89] found id: ""
	I0610 11:53:27.850888   56769 logs.go:276] 1 containers: [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933]
	I0610 11:53:27.850947   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:27.855284   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:53:27.855347   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:53:27.901228   56769 cri.go:89] found id: "7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:27.901256   56769 cri.go:89] found id: ""
	I0610 11:53:27.901266   56769 logs.go:276] 1 containers: [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9]
	I0610 11:53:27.901322   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:27.905361   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:53:27.905428   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:53:27.943162   56769 cri.go:89] found id: "3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:27.943187   56769 cri.go:89] found id: ""
	I0610 11:53:27.943197   56769 logs.go:276] 1 containers: [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb]
	I0610 11:53:27.943251   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:27.951934   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:53:27.952015   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:53:27.996288   56769 cri.go:89] found id: "7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:27.996316   56769 cri.go:89] found id: ""
	I0610 11:53:27.996325   56769 logs.go:276] 1 containers: [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43]
	I0610 11:53:27.996381   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:28.000307   56769 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:53:28.000378   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:53:28.036978   56769 cri.go:89] found id: ""
	I0610 11:53:28.037016   56769 logs.go:276] 0 containers: []
	W0610 11:53:28.037026   56769 logs.go:278] No container was found matching "kindnet"
	I0610 11:53:28.037033   56769 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0610 11:53:28.037091   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0610 11:53:28.078338   56769 cri.go:89] found id: "5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:28.078363   56769 cri.go:89] found id: "8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:28.078368   56769 cri.go:89] found id: ""
	I0610 11:53:28.078377   56769 logs.go:276] 2 containers: [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262]
	I0610 11:53:28.078433   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:28.082899   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:28.087382   56769 logs.go:123] Gathering logs for storage-provisioner [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e] ...
	I0610 11:53:28.087416   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:28.123014   56769 logs.go:123] Gathering logs for kubelet ...
	I0610 11:53:28.123051   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:53:28.186128   56769 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:53:28.186160   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 11:53:28.314495   56769 logs.go:123] Gathering logs for kube-apiserver [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29] ...
	I0610 11:53:28.314539   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:28.358953   56769 logs.go:123] Gathering logs for coredns [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933] ...
	I0610 11:53:28.358981   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:28.394280   56769 logs.go:123] Gathering logs for kube-controller-manager [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43] ...
	I0610 11:53:28.394306   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:28.450138   56769 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:53:28.450172   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:53:28.851268   56769 logs.go:123] Gathering logs for container status ...
	I0610 11:53:28.851307   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:53:28.909176   56769 logs.go:123] Gathering logs for dmesg ...
	I0610 11:53:28.909202   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:53:28.927322   56769 logs.go:123] Gathering logs for etcd [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c] ...
	I0610 11:53:28.927359   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:28.983941   56769 logs.go:123] Gathering logs for kube-scheduler [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9] ...
	I0610 11:53:28.983971   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:29.023327   56769 logs.go:123] Gathering logs for kube-proxy [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb] ...
	I0610 11:53:29.023352   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:29.063624   56769 logs.go:123] Gathering logs for storage-provisioner [8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262] ...
	I0610 11:53:29.063655   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:28.973316   57572 out.go:204]   - Configuring RBAC rules ...
	I0610 11:53:28.973437   57572 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 11:53:28.979726   57572 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 11:53:28.989075   57572 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 11:53:28.999678   57572 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 11:53:29.005717   57572 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 11:53:29.014439   57572 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 11:53:29.292088   57572 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 11:53:29.734969   57572 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 11:53:30.288723   57572 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 11:53:30.289824   57572 kubeadm.go:309] 
	I0610 11:53:30.289918   57572 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 11:53:30.289930   57572 kubeadm.go:309] 
	I0610 11:53:30.290061   57572 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 11:53:30.290078   57572 kubeadm.go:309] 
	I0610 11:53:30.290107   57572 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 11:53:30.290191   57572 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 11:53:30.290268   57572 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 11:53:30.290316   57572 kubeadm.go:309] 
	I0610 11:53:30.290402   57572 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 11:53:30.290412   57572 kubeadm.go:309] 
	I0610 11:53:30.290481   57572 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 11:53:30.290494   57572 kubeadm.go:309] 
	I0610 11:53:30.290539   57572 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 11:53:30.290602   57572 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 11:53:30.290659   57572 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 11:53:30.290666   57572 kubeadm.go:309] 
	I0610 11:53:30.290749   57572 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 11:53:30.290816   57572 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 11:53:30.290823   57572 kubeadm.go:309] 
	I0610 11:53:30.290901   57572 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token zdzp8m.ttyzgfzbws24vbk8 \
	I0610 11:53:30.291011   57572 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e \
	I0610 11:53:30.291032   57572 kubeadm.go:309] 	--control-plane 
	I0610 11:53:30.291038   57572 kubeadm.go:309] 
	I0610 11:53:30.291113   57572 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 11:53:30.291120   57572 kubeadm.go:309] 
	I0610 11:53:30.291230   57572 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token zdzp8m.ttyzgfzbws24vbk8 \
	I0610 11:53:30.291370   57572 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e 
	I0610 11:53:30.291895   57572 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 11:53:30.291925   57572 cni.go:84] Creating CNI manager for ""
	I0610 11:53:30.291936   57572 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:53:30.294227   57572 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 11:53:30.295470   57572 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 11:53:30.306011   57572 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
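The 496-byte conflist scp'd above is not reproduced in the log. For orientation only, a bridge CNI configuration of that kind typically looks like the commented sketch below; the subnet and plugin list are assumptions, not the exact file minikube writes:

    sudo cat /etc/cni/net.d/1-k8s.conflist   # inspect what was actually written
    # illustrative shape of such a conflist (values are assumptions):
    # {
    #   "cniVersion": "1.0.0",
    #   "name": "bridge",
    #   "plugins": [
    #     { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
    #       "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    #     { "type": "portmap", "capabilities": { "portMappings": true } }
    #   ]
    # }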
	I0610 11:53:30.322832   57572 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 11:53:30.322890   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:30.322960   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-298179 minikube.k8s.io/updated_at=2024_06_10T11_53_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=no-preload-298179 minikube.k8s.io/primary=true
	I0610 11:53:30.486915   57572 ops.go:34] apiserver oom_adj: -16
	I0610 11:53:30.487320   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:30.988103   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:31.488094   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:31.988314   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:32.487603   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:31.265182   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:31.597111   56769 api_server.go:253] Checking apiserver healthz at https://192.168.61.19:8443/healthz ...
	I0610 11:53:31.601589   56769 api_server.go:279] https://192.168.61.19:8443/healthz returned 200:
	ok
	I0610 11:53:31.602609   56769 api_server.go:141] control plane version: v1.30.1
	I0610 11:53:31.602631   56769 api_server.go:131] duration metric: took 3.871604169s to wait for apiserver health ...
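The healthz probe above can be repeated manually against the same endpoint; a sketch with the address, port, and binary path taken from the log:

    curl -k https://192.168.61.19:8443/healthz ; echo    # expect: ok
    sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig version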
	I0610 11:53:31.602639   56769 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 11:53:31.602663   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:53:31.602716   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:53:31.650102   56769 cri.go:89] found id: "61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:31.650130   56769 cri.go:89] found id: ""
	I0610 11:53:31.650139   56769 logs.go:276] 1 containers: [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29]
	I0610 11:53:31.650197   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.654234   56769 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:53:31.654299   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:53:31.690704   56769 cri.go:89] found id: "0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:31.690736   56769 cri.go:89] found id: ""
	I0610 11:53:31.690750   56769 logs.go:276] 1 containers: [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c]
	I0610 11:53:31.690810   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.695139   56769 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:53:31.695209   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:53:31.732593   56769 cri.go:89] found id: "04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:31.732614   56769 cri.go:89] found id: ""
	I0610 11:53:31.732621   56769 logs.go:276] 1 containers: [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933]
	I0610 11:53:31.732667   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.737201   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:53:31.737277   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:53:31.774177   56769 cri.go:89] found id: "7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:31.774219   56769 cri.go:89] found id: ""
	I0610 11:53:31.774239   56769 logs.go:276] 1 containers: [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9]
	I0610 11:53:31.774300   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.778617   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:53:31.778695   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:53:31.816633   56769 cri.go:89] found id: "3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:31.816657   56769 cri.go:89] found id: ""
	I0610 11:53:31.816665   56769 logs.go:276] 1 containers: [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb]
	I0610 11:53:31.816715   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.820846   56769 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:53:31.820928   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:53:31.857021   56769 cri.go:89] found id: "7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:31.857052   56769 cri.go:89] found id: ""
	I0610 11:53:31.857062   56769 logs.go:276] 1 containers: [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43]
	I0610 11:53:31.857127   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.862825   56769 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:53:31.862888   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:53:31.903792   56769 cri.go:89] found id: ""
	I0610 11:53:31.903817   56769 logs.go:276] 0 containers: []
	W0610 11:53:31.903825   56769 logs.go:278] No container was found matching "kindnet"
	I0610 11:53:31.903837   56769 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0610 11:53:31.903885   56769 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0610 11:53:31.942392   56769 cri.go:89] found id: "5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:31.942414   56769 cri.go:89] found id: "8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:31.942419   56769 cri.go:89] found id: ""
	I0610 11:53:31.942428   56769 logs.go:276] 2 containers: [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262]
	I0610 11:53:31.942481   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.949047   56769 ssh_runner.go:195] Run: which crictl
	I0610 11:53:31.953590   56769 logs.go:123] Gathering logs for kube-scheduler [7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9] ...
	I0610 11:53:31.953625   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7afbab9bcf1ac3fe2c221a82c5f13b2a16c7b8714801bf6ca6a4a5a9b8d8d7f9"
	I0610 11:53:31.991926   56769 logs.go:123] Gathering logs for kube-controller-manager [7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43] ...
	I0610 11:53:31.991954   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7badb7b66c71f8071b98d3dd1ee8bc9f7cb67227569f5545baa451047c072a43"
	I0610 11:53:32.040857   56769 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:53:32.040894   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:53:32.432680   56769 logs.go:123] Gathering logs for container status ...
	I0610 11:53:32.432731   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0610 11:53:32.474819   56769 logs.go:123] Gathering logs for kubelet ...
	I0610 11:53:32.474849   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:53:32.530152   56769 logs.go:123] Gathering logs for dmesg ...
	I0610 11:53:32.530189   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:53:32.547698   56769 logs.go:123] Gathering logs for etcd [0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c] ...
	I0610 11:53:32.547735   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c16d9960d9ab36a0c55b0f72fd2406a87b141095804d2afe0d246fffdd6fd6c"
	I0610 11:53:32.598580   56769 logs.go:123] Gathering logs for kube-proxy [3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb] ...
	I0610 11:53:32.598634   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c7292ccdd40d527454d5c8987e39ad25983f85ba302e740d025152ce2321acb"
	I0610 11:53:32.643864   56769 logs.go:123] Gathering logs for storage-provisioner [5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e] ...
	I0610 11:53:32.643900   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5509696f5a811891e62a9d64b9e8834b4457c6100b479f939f09a9cb7ecb1a5e"
	I0610 11:53:32.679085   56769 logs.go:123] Gathering logs for storage-provisioner [8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262] ...
	I0610 11:53:32.679118   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d8bc4b6855e1c4ed78726284ae93940fff9cac1278228421fb8cfd56b896262"
	I0610 11:53:32.714247   56769 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:53:32.714279   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0610 11:53:32.818508   56769 logs.go:123] Gathering logs for kube-apiserver [61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29] ...
	I0610 11:53:32.818551   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61727f8f43e1d3070f353f32b17b31f3d0e7c1aac8361b08e4f7618a511a3d29"
	I0610 11:53:32.862390   56769 logs.go:123] Gathering logs for coredns [04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933] ...
	I0610 11:53:32.862424   56769 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04ef0964178aeb58a0851cc11f871d0f16cfe9e814855179e52e1d824476a933"
	I0610 11:53:35.408169   56769 system_pods.go:59] 8 kube-system pods found
	I0610 11:53:35.408198   56769 system_pods.go:61] "coredns-7db6d8ff4d-7dlzb" [4b2618cd-b48c-44bd-a07d-4fe4585a14fa] Running
	I0610 11:53:35.408203   56769 system_pods.go:61] "etcd-embed-certs-832735" [4b7d413d-9a2a-4677-b279-5a6d39904679] Running
	I0610 11:53:35.408208   56769 system_pods.go:61] "kube-apiserver-embed-certs-832735" [7e11e03e-7b15-4e9b-8f9a-9a46d7aadd7e] Running
	I0610 11:53:35.408211   56769 system_pods.go:61] "kube-controller-manager-embed-certs-832735" [75aa996d-fdf3-4c32-b25d-03c7582b3502] Running
	I0610 11:53:35.408215   56769 system_pods.go:61] "kube-proxy-b7x2p" [fe1cd055-691f-46b1-ada7-7dded31d2308] Running
	I0610 11:53:35.408218   56769 system_pods.go:61] "kube-scheduler-embed-certs-832735" [b7a7fcfb-7ce9-4470-9052-79bc13029408] Running
	I0610 11:53:35.408223   56769 system_pods.go:61] "metrics-server-569cc877fc-5zg8j" [e979b4b0-356d-479d-990f-d9e6e46a1a9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 11:53:35.408233   56769 system_pods.go:61] "storage-provisioner" [47aa143e-3545-492d-ac93-e62f0076e0f4] Running
	I0610 11:53:35.408241   56769 system_pods.go:74] duration metric: took 3.805596332s to wait for pod list to return data ...
	I0610 11:53:35.408248   56769 default_sa.go:34] waiting for default service account to be created ...
	I0610 11:53:35.410634   56769 default_sa.go:45] found service account: "default"
	I0610 11:53:35.410659   56769 default_sa.go:55] duration metric: took 2.405735ms for default service account to be created ...
	I0610 11:53:35.410667   56769 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 11:53:35.415849   56769 system_pods.go:86] 8 kube-system pods found
	I0610 11:53:35.415871   56769 system_pods.go:89] "coredns-7db6d8ff4d-7dlzb" [4b2618cd-b48c-44bd-a07d-4fe4585a14fa] Running
	I0610 11:53:35.415876   56769 system_pods.go:89] "etcd-embed-certs-832735" [4b7d413d-9a2a-4677-b279-5a6d39904679] Running
	I0610 11:53:35.415881   56769 system_pods.go:89] "kube-apiserver-embed-certs-832735" [7e11e03e-7b15-4e9b-8f9a-9a46d7aadd7e] Running
	I0610 11:53:35.415886   56769 system_pods.go:89] "kube-controller-manager-embed-certs-832735" [75aa996d-fdf3-4c32-b25d-03c7582b3502] Running
	I0610 11:53:35.415890   56769 system_pods.go:89] "kube-proxy-b7x2p" [fe1cd055-691f-46b1-ada7-7dded31d2308] Running
	I0610 11:53:35.415894   56769 system_pods.go:89] "kube-scheduler-embed-certs-832735" [b7a7fcfb-7ce9-4470-9052-79bc13029408] Running
	I0610 11:53:35.415900   56769 system_pods.go:89] "metrics-server-569cc877fc-5zg8j" [e979b4b0-356d-479d-990f-d9e6e46a1a9b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 11:53:35.415906   56769 system_pods.go:89] "storage-provisioner" [47aa143e-3545-492d-ac93-e62f0076e0f4] Running
	I0610 11:53:35.415913   56769 system_pods.go:126] duration metric: took 5.241641ms to wait for k8s-apps to be running ...
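The metrics-server pod listed above is still Pending with unready containers. A standard way to see why, using the pod name from the log (diagnostic only; the logs call may return nothing if the container never started):

    kubectl -n kube-system describe pod metrics-server-569cc877fc-5zg8j | tail -n 30
    kubectl -n kube-system logs deploy/metrics-server --all-containers --tail=100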
	I0610 11:53:35.415919   56769 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 11:53:35.415957   56769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:53:35.431179   56769 system_svc.go:56] duration metric: took 15.252123ms WaitForService to wait for kubelet
	I0610 11:53:35.431209   56769 kubeadm.go:576] duration metric: took 4m21.85536785s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 11:53:35.431233   56769 node_conditions.go:102] verifying NodePressure condition ...
	I0610 11:53:35.433918   56769 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 11:53:35.433941   56769 node_conditions.go:123] node cpu capacity is 2
	I0610 11:53:35.433955   56769 node_conditions.go:105] duration metric: took 2.718538ms to run NodePressure ...
	I0610 11:53:35.433966   56769 start.go:240] waiting for startup goroutines ...
	I0610 11:53:35.433973   56769 start.go:245] waiting for cluster config update ...
	I0610 11:53:35.433982   56769 start.go:254] writing updated cluster config ...
	I0610 11:53:35.434234   56769 ssh_runner.go:195] Run: rm -f paused
	I0610 11:53:35.483552   56769 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 11:53:35.485459   56769 out.go:177] * Done! kubectl is now configured to use "embed-certs-832735" cluster and "default" namespace by default
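After the "Done!" line above, the resulting context can be sanity-checked with ordinary kubectl commands (profile name taken from the log):

    kubectl config current-context      # expect: embed-certs-832735
    kubectl get nodes -o wide
    kubectl -n kube-system get pods     # the eight pods listed above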
	I0610 11:53:34.892890   57945 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0610 11:53:34.893019   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:53:34.893195   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
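When the kubelet-check above fails like this, the usual first diagnostics on the affected node are the systemd unit state, the recent kubelet log, and the same healthz endpoint kubeadm polls; a sketch:

    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet -n 100 --no-pager
    curl -sS http://localhost:10248/healthz ; echo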
	I0610 11:53:32.987749   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:33.488008   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:33.988419   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:34.488002   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:34.988349   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:35.487347   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:35.987479   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:36.487972   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:36.987442   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:37.488069   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:34.337236   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:39.893441   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:53:39.893640   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:53:37.987751   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:38.488215   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:38.987955   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:39.487394   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:39.987431   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:40.488304   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:40.987779   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:41.488123   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:41.987438   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:42.487799   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:42.987548   57572 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 11:53:43.084050   57572 kubeadm.go:1107] duration metric: took 12.761214532s to wait for elevateKubeSystemPrivileges
	W0610 11:53:43.084095   57572 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 11:53:43.084109   57572 kubeadm.go:393] duration metric: took 5m9.100565129s to StartCluster
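The repeated "kubectl get sa default" calls above are a readiness poll for the default service account; a hypothetical equivalent wait loop:

    until sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
    echo "default service account exists"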
	I0610 11:53:43.084128   57572 settings.go:142] acquiring lock: {Name:mk00410f6b6051b7558c7a348cc8c9f1c35c7547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:53:43.084215   57572 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 11:53:43.085889   57572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/kubeconfig: {Name:mk6bc087e599296d9e4a696a021944fac20ee98b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:53:43.086151   57572 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.48 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 11:53:43.087762   57572 out.go:177] * Verifying Kubernetes components...
	I0610 11:53:43.086215   57572 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 11:53:43.087796   57572 addons.go:69] Setting storage-provisioner=true in profile "no-preload-298179"
	I0610 11:53:43.087800   57572 addons.go:69] Setting default-storageclass=true in profile "no-preload-298179"
	I0610 11:53:43.087819   57572 addons.go:234] Setting addon storage-provisioner=true in "no-preload-298179"
	W0610 11:53:43.087825   57572 addons.go:243] addon storage-provisioner should already be in state true
	I0610 11:53:43.087832   57572 addons.go:69] Setting metrics-server=true in profile "no-preload-298179"
	I0610 11:53:43.087847   57572 addons.go:234] Setting addon metrics-server=true in "no-preload-298179"
	W0610 11:53:43.087856   57572 addons.go:243] addon metrics-server should already be in state true
	I0610 11:53:43.087826   57572 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-298179"
	I0610 11:53:43.087878   57572 host.go:66] Checking if "no-preload-298179" exists ...
	I0610 11:53:43.089535   57572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:53:43.087856   57572 host.go:66] Checking if "no-preload-298179" exists ...
	I0610 11:53:43.086356   57572 config.go:182] Loaded profile config "no-preload-298179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:53:43.088180   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.088182   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.089687   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.089713   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.089869   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.089895   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.104587   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38283
	I0610 11:53:43.104609   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44535
	I0610 11:53:43.104586   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34031
	I0610 11:53:43.105501   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.105566   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.105508   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.105983   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.105997   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.106134   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.106153   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.106160   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.106184   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.106350   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.106526   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.106568   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.106692   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetState
	I0610 11:53:43.106890   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.106918   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.107118   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.107141   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.109645   57572 addons.go:234] Setting addon default-storageclass=true in "no-preload-298179"
	W0610 11:53:43.109664   57572 addons.go:243] addon default-storageclass should already be in state true
	I0610 11:53:43.109692   57572 host.go:66] Checking if "no-preload-298179" exists ...
	I0610 11:53:43.109914   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.109939   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.123209   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45401
	I0610 11:53:43.123703   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.124011   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38295
	I0610 11:53:43.124351   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.124372   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.124393   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.124777   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.124847   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.124869   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.124998   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetState
	I0610 11:53:43.125208   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.125941   57572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:53:43.125994   57572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:53:43.126208   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35175
	I0610 11:53:43.126555   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.126915   57572 main.go:141] libmachine: (no-preload-298179) Calling .DriverName
	I0610 11:53:43.127030   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.127038   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.129007   57572 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0610 11:53:43.127369   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.130329   57572 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0610 11:53:43.130349   57572 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0610 11:53:43.130372   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHHostname
	I0610 11:53:43.130501   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetState
	I0610 11:53:43.132699   57572 main.go:141] libmachine: (no-preload-298179) Calling .DriverName
	I0610 11:53:43.134359   57572 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 11:53:40.417218   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:43.489341   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
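The repeated dial errors above all target 192.168.50.222:22. Basic connectivity checks from the host for the kvm2 driver look like this (the address comes from the log; the libvirt commands assume the standard virsh tooling is installed):

    ping -c 3 192.168.50.222
    nc -vz -w 5 192.168.50.222 22    # is SSH reachable at all?
    virsh list --all                 # is the VM actually running?
    virsh net-list --all             # libvirt networks backing the kvm2 driver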
	I0610 11:53:43.135801   57572 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 11:53:43.135818   57572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 11:53:43.135837   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHHostname
	I0610 11:53:43.134045   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.135918   57572 main.go:141] libmachine: (no-preload-298179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:72:68", ip: ""} in network mk-no-preload-298179: {Iface:virbr2 ExpiryTime:2024-06-10 12:48:08 +0000 UTC Type:0 Mac:52:54:00:92:72:68 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:no-preload-298179 Clientid:01:52:54:00:92:72:68}
	I0610 11:53:43.135948   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined IP address 192.168.39.48 and MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.134772   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHPort
	I0610 11:53:43.136159   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHKeyPath
	I0610 11:53:43.136318   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHUsername
	I0610 11:53:43.136621   57572 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/no-preload-298179/id_rsa Username:docker}
	I0610 11:53:43.139217   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.139636   57572 main.go:141] libmachine: (no-preload-298179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:72:68", ip: ""} in network mk-no-preload-298179: {Iface:virbr2 ExpiryTime:2024-06-10 12:48:08 +0000 UTC Type:0 Mac:52:54:00:92:72:68 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:no-preload-298179 Clientid:01:52:54:00:92:72:68}
	I0610 11:53:43.139658   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined IP address 192.168.39.48 and MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.140091   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHPort
	I0610 11:53:43.140568   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHKeyPath
	I0610 11:53:43.140865   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHUsername
	I0610 11:53:43.141293   57572 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/no-preload-298179/id_rsa Username:docker}
	I0610 11:53:43.145179   57572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40713
	I0610 11:53:43.145813   57572 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:53:43.146336   57572 main.go:141] libmachine: Using API Version  1
	I0610 11:53:43.146358   57572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:53:43.146675   57572 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:53:43.146987   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetState
	I0610 11:53:43.148747   57572 main.go:141] libmachine: (no-preload-298179) Calling .DriverName
	I0610 11:53:43.149026   57572 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 11:53:43.149042   57572 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 11:53:43.149064   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHHostname
	I0610 11:53:43.152048   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.152550   57572 main.go:141] libmachine: (no-preload-298179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:72:68", ip: ""} in network mk-no-preload-298179: {Iface:virbr2 ExpiryTime:2024-06-10 12:48:08 +0000 UTC Type:0 Mac:52:54:00:92:72:68 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:no-preload-298179 Clientid:01:52:54:00:92:72:68}
	I0610 11:53:43.152572   57572 main.go:141] libmachine: (no-preload-298179) DBG | domain no-preload-298179 has defined IP address 192.168.39.48 and MAC address 52:54:00:92:72:68 in network mk-no-preload-298179
	I0610 11:53:43.152780   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHPort
	I0610 11:53:43.153021   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHKeyPath
	I0610 11:53:43.153256   57572 main.go:141] libmachine: (no-preload-298179) Calling .GetSSHUsername
	I0610 11:53:43.153406   57572 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/no-preload-298179/id_rsa Username:docker}
	I0610 11:53:43.293079   57572 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 11:53:43.323699   57572 node_ready.go:35] waiting up to 6m0s for node "no-preload-298179" to be "Ready" ...
	I0610 11:53:43.331922   57572 node_ready.go:49] node "no-preload-298179" has status "Ready":"True"
	I0610 11:53:43.331946   57572 node_ready.go:38] duration metric: took 8.212434ms for node "no-preload-298179" to be "Ready" ...
	I0610 11:53:43.331956   57572 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 11:53:43.338721   57572 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-9mqrm" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:43.399175   57572 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0610 11:53:43.399196   57572 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0610 11:53:43.432920   57572 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0610 11:53:43.432986   57572 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0610 11:53:43.453982   57572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 11:53:43.457146   57572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 11:53:43.500871   57572 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0610 11:53:43.500900   57572 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0610 11:53:43.601303   57572 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
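Once the addon manifests above are applied, registration can be verified against the objects the metrics-server addon is expected to create; a sketch (the APIService name follows the upstream metrics-server convention and is an assumption here):

    kubectl get apiservice v1beta1.metrics.k8s.io
    kubectl -n kube-system rollout status deploy/metrics-server --timeout=120s
    kubectl top nodes    # only returns data once metrics are actually served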
	I0610 11:53:44.376916   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.376992   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.377083   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.377105   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.377298   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.377377   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.377383   57572 main.go:141] libmachine: (no-preload-298179) DBG | Closing plugin on server side
	I0610 11:53:44.377301   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.377394   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.377403   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.377405   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.377414   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.377421   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.377608   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.377634   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.379039   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.379090   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.379054   57572 main.go:141] libmachine: (no-preload-298179) DBG | Closing plugin on server side
	I0610 11:53:44.397328   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.397354   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.397626   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.397644   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.880094   57572 pod_ready.go:92] pod "coredns-7db6d8ff4d-9mqrm" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:44.880129   57572 pod_ready.go:81] duration metric: took 1.541384627s for pod "coredns-7db6d8ff4d-9mqrm" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.880149   57572 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-f622z" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.901625   57572 pod_ready.go:92] pod "coredns-7db6d8ff4d-f622z" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:44.901649   57572 pod_ready.go:81] duration metric: took 21.492207ms for pod "coredns-7db6d8ff4d-f622z" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.901658   57572 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.907530   57572 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.306184796s)
	I0610 11:53:44.907587   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.907603   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.907929   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.907991   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.908005   57572 main.go:141] libmachine: Making call to close driver server
	I0610 11:53:44.908015   57572 main.go:141] libmachine: (no-preload-298179) Calling .Close
	I0610 11:53:44.908262   57572 main.go:141] libmachine: Successfully made call to close driver server
	I0610 11:53:44.908301   57572 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 11:53:44.908305   57572 main.go:141] libmachine: (no-preload-298179) DBG | Closing plugin on server side
	I0610 11:53:44.908315   57572 addons.go:475] Verifying addon metrics-server=true in "no-preload-298179"
	I0610 11:53:44.910622   57572 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0610 11:53:44.911848   57572 addons.go:510] duration metric: took 1.825630817s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0610 11:53:44.922534   57572 pod_ready.go:92] pod "etcd-no-preload-298179" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:44.922562   57572 pod_ready.go:81] duration metric: took 20.896794ms for pod "etcd-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.922576   57572 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.947545   57572 pod_ready.go:92] pod "kube-apiserver-no-preload-298179" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:44.947569   57572 pod_ready.go:81] duration metric: took 24.984822ms for pod "kube-apiserver-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.947578   57572 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.956216   57572 pod_ready.go:92] pod "kube-controller-manager-no-preload-298179" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:44.956240   57572 pod_ready.go:81] duration metric: took 8.656291ms for pod "kube-controller-manager-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:44.956256   57572 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fhndh" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:45.326936   57572 pod_ready.go:92] pod "kube-proxy-fhndh" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:45.326977   57572 pod_ready.go:81] duration metric: took 370.713967ms for pod "kube-proxy-fhndh" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:45.326987   57572 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:45.733487   57572 pod_ready.go:92] pod "kube-scheduler-no-preload-298179" in "kube-system" namespace has status "Ready":"True"
	I0610 11:53:45.733514   57572 pod_ready.go:81] duration metric: took 406.51925ms for pod "kube-scheduler-no-preload-298179" in "kube-system" namespace to be "Ready" ...
	I0610 11:53:45.733525   57572 pod_ready.go:38] duration metric: took 2.401559014s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 11:53:45.733544   57572 api_server.go:52] waiting for apiserver process to appear ...
	I0610 11:53:45.733612   57572 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:53:45.754814   57572 api_server.go:72] duration metric: took 2.668628419s to wait for apiserver process to appear ...
	I0610 11:53:45.754838   57572 api_server.go:88] waiting for apiserver healthz status ...
	I0610 11:53:45.754867   57572 api_server.go:253] Checking apiserver healthz at https://192.168.39.48:8443/healthz ...
	I0610 11:53:45.763742   57572 api_server.go:279] https://192.168.39.48:8443/healthz returned 200:
	ok
	I0610 11:53:45.765314   57572 api_server.go:141] control plane version: v1.30.1
	I0610 11:53:45.765345   57572 api_server.go:131] duration metric: took 10.498726ms to wait for apiserver health ...
	I0610 11:53:45.765356   57572 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 11:53:45.930764   57572 system_pods.go:59] 9 kube-system pods found
	I0610 11:53:45.930792   57572 system_pods.go:61] "coredns-7db6d8ff4d-9mqrm" [6269d670-dffa-4526-8117-0b44df04554a] Running
	I0610 11:53:45.930796   57572 system_pods.go:61] "coredns-7db6d8ff4d-f622z" [16cb4de3-afa9-4e45-bc85-e51273973808] Running
	I0610 11:53:45.930800   57572 system_pods.go:61] "etcd-no-preload-298179" [088f1950-04c4-49e0-b3e2-fe8b5f398a08] Running
	I0610 11:53:45.930806   57572 system_pods.go:61] "kube-apiserver-no-preload-298179" [11bad142-25ff-4aa9-9d9e-4b7cbb053bdd] Running
	I0610 11:53:45.930810   57572 system_pods.go:61] "kube-controller-manager-no-preload-298179" [ac29a4d9-6e9c-44fd-bb39-477255b94d0c] Running
	I0610 11:53:45.930814   57572 system_pods.go:61] "kube-proxy-fhndh" [50f848e7-44f6-4ab1-bf94-3189733abca2] Running
	I0610 11:53:45.930818   57572 system_pods.go:61] "kube-scheduler-no-preload-298179" [8569c375-b9bd-4a46-91ea-c6372056e45d] Running
	I0610 11:53:45.930826   57572 system_pods.go:61] "metrics-server-569cc877fc-jp7dr" [21136ae9-40d8-4857-aca5-47e3fa3b7e9c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 11:53:45.930831   57572 system_pods.go:61] "storage-provisioner" [783f523c-4c21-4ae0-bc18-9c391e7342b0] Running
	I0610 11:53:45.930843   57572 system_pods.go:74] duration metric: took 165.479385ms to wait for pod list to return data ...
	I0610 11:53:45.930855   57572 default_sa.go:34] waiting for default service account to be created ...
	I0610 11:53:46.127109   57572 default_sa.go:45] found service account: "default"
	I0610 11:53:46.127145   57572 default_sa.go:55] duration metric: took 196.279685ms for default service account to be created ...
	I0610 11:53:46.127154   57572 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 11:53:46.330560   57572 system_pods.go:86] 9 kube-system pods found
	I0610 11:53:46.330587   57572 system_pods.go:89] "coredns-7db6d8ff4d-9mqrm" [6269d670-dffa-4526-8117-0b44df04554a] Running
	I0610 11:53:46.330592   57572 system_pods.go:89] "coredns-7db6d8ff4d-f622z" [16cb4de3-afa9-4e45-bc85-e51273973808] Running
	I0610 11:53:46.330597   57572 system_pods.go:89] "etcd-no-preload-298179" [088f1950-04c4-49e0-b3e2-fe8b5f398a08] Running
	I0610 11:53:46.330601   57572 system_pods.go:89] "kube-apiserver-no-preload-298179" [11bad142-25ff-4aa9-9d9e-4b7cbb053bdd] Running
	I0610 11:53:46.330605   57572 system_pods.go:89] "kube-controller-manager-no-preload-298179" [ac29a4d9-6e9c-44fd-bb39-477255b94d0c] Running
	I0610 11:53:46.330608   57572 system_pods.go:89] "kube-proxy-fhndh" [50f848e7-44f6-4ab1-bf94-3189733abca2] Running
	I0610 11:53:46.330612   57572 system_pods.go:89] "kube-scheduler-no-preload-298179" [8569c375-b9bd-4a46-91ea-c6372056e45d] Running
	I0610 11:53:46.330619   57572 system_pods.go:89] "metrics-server-569cc877fc-jp7dr" [21136ae9-40d8-4857-aca5-47e3fa3b7e9c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 11:53:46.330623   57572 system_pods.go:89] "storage-provisioner" [783f523c-4c21-4ae0-bc18-9c391e7342b0] Running
	I0610 11:53:46.330631   57572 system_pods.go:126] duration metric: took 203.472984ms to wait for k8s-apps to be running ...
	I0610 11:53:46.330640   57572 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 11:53:46.330681   57572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:53:46.345084   57572 system_svc.go:56] duration metric: took 14.432966ms WaitForService to wait for kubelet
	I0610 11:53:46.345113   57572 kubeadm.go:576] duration metric: took 3.258932349s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 11:53:46.345131   57572 node_conditions.go:102] verifying NodePressure condition ...
	I0610 11:53:46.528236   57572 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 11:53:46.528269   57572 node_conditions.go:123] node cpu capacity is 2
	I0610 11:53:46.528278   57572 node_conditions.go:105] duration metric: took 183.142711ms to run NodePressure ...
	I0610 11:53:46.528288   57572 start.go:240] waiting for startup goroutines ...
	I0610 11:53:46.528294   57572 start.go:245] waiting for cluster config update ...
	I0610 11:53:46.528303   57572 start.go:254] writing updated cluster config ...
	I0610 11:53:46.528561   57572 ssh_runner.go:195] Run: rm -f paused
	I0610 11:53:46.576348   57572 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 11:53:46.578434   57572 out.go:177] * Done! kubectl is now configured to use "no-preload-298179" cluster and "default" namespace by default
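	[editor's note] The block above shows minikube waiting, with a 6m0s budget, for each kube-system pod (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) to report Ready before declaring the no-preload-298179 cluster usable. A minimal client-go sketch of the same kind of readiness poll is below; the kubeconfig path, the 2-second poll interval, and running it from outside the node are assumptions for illustration, not what minikube's pod_ready.go actually does.

	// podready_sketch.go -- illustrative only; mirrors the "waiting for pod ... to be Ready"
	// steps logged above, not minikube's internal implementation.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig path; substitute the one minikube wrote for your profile.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget as the log
		for time.Now().Before(deadline) {
			pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
			if err == nil && allReady(pods.Items) {
				fmt.Println("all kube-system pods are Ready")
				return
			}
			time.Sleep(2 * time.Second) // assumed poll interval
		}
		fmt.Println("timed out waiting for kube-system pods to become Ready")
	}

	// allReady reports whether every pod has the PodReady condition set to True.
	func allReady(pods []corev1.Pod) bool {
		for _, p := range pods {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
					break
				}
			}
			if !ready {
				return false
			}
		}
		return true
	}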
	I0610 11:53:49.894176   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:53:49.894368   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:53:49.573292   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:52.641233   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:53:58.721260   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:01.793270   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:07.873263   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:09.895012   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:54:09.895413   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:54:10.945237   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:17.025183   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:20.097196   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:26.177217   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:29.249267   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:35.329193   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:38.401234   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:44.481254   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:47.553200   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:49.896623   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:54:49.896849   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:54:49.896868   57945 kubeadm.go:309] 
	I0610 11:54:49.896922   57945 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0610 11:54:49.897030   57945 kubeadm.go:309] 		timed out waiting for the condition
	I0610 11:54:49.897053   57945 kubeadm.go:309] 
	I0610 11:54:49.897121   57945 kubeadm.go:309] 	This error is likely caused by:
	I0610 11:54:49.897157   57945 kubeadm.go:309] 		- The kubelet is not running
	I0610 11:54:49.897308   57945 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0610 11:54:49.897322   57945 kubeadm.go:309] 
	I0610 11:54:49.897493   57945 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0610 11:54:49.897553   57945 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0610 11:54:49.897612   57945 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0610 11:54:49.897623   57945 kubeadm.go:309] 
	I0610 11:54:49.897755   57945 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0610 11:54:49.897866   57945 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0610 11:54:49.897876   57945 kubeadm.go:309] 
	I0610 11:54:49.898032   57945 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0610 11:54:49.898139   57945 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0610 11:54:49.898253   57945 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0610 11:54:49.898357   57945 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0610 11:54:49.898365   57945 kubeadm.go:309] 
	I0610 11:54:49.899094   57945 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 11:54:49.899208   57945 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0610 11:54:49.899302   57945 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0610 11:54:49.899441   57945 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
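	[editor's note] The kubelet-check lines above repeat the probe kubeadm performs: an HTTP GET against http://localhost:10248/healthz that keeps failing with "connection refused" because the kubelet never came up on this old-k8s-version node. A stdlib-only sketch of that probe, run on the node itself, is below; the retry count and 10-second interval are assumptions for illustration and do not match kubeadm's own backoff.

	// kubelethealth_sketch.go -- a minimal, illustrative version of the
	// 'curl -sSL http://localhost:10248/healthz' check quoted in the log above.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		// Assumed retry policy for illustration; kubeadm uses its own timings.
		for attempt := 1; attempt <= 10; attempt++ {
			resp, err := client.Get("http://localhost:10248/healthz")
			if err != nil {
				// Matches the failure mode in the log: connection refused while
				// the kubelet is not running.
				fmt.Printf("attempt %d: kubelet healthz unreachable: %v\n", attempt, err)
				time.Sleep(10 * time.Second)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("attempt %d: HTTP %d: %s\n", attempt, resp.StatusCode, string(body))
			if resp.StatusCode == http.StatusOK {
				return
			}
			time.Sleep(10 * time.Second)
		}
		fmt.Println("kubelet never reported healthy; check 'journalctl -xeu kubelet'")
	}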
	
	I0610 11:54:49.899502   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0610 11:54:50.366528   57945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:54:50.380107   57945 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:54:50.390067   57945 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:54:50.390089   57945 kubeadm.go:156] found existing configuration files:
	
	I0610 11:54:50.390132   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0610 11:54:50.399159   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:54:50.399222   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:54:50.409346   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0610 11:54:50.420402   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:54:50.420458   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:54:50.432874   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0610 11:54:50.444351   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:54:50.444430   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:54:50.458175   57945 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0610 11:54:50.468538   57945 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:54:50.468611   57945 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 11:54:50.480033   57945 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 11:54:50.543600   57945 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0610 11:54:50.543653   57945 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 11:54:50.682810   57945 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 11:54:50.682970   57945 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 11:54:50.683117   57945 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 11:54:50.877761   57945 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 11:54:50.879686   57945 out.go:204]   - Generating certificates and keys ...
	I0610 11:54:50.879788   57945 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 11:54:50.879881   57945 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 11:54:50.880010   57945 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 11:54:50.880075   57945 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 11:54:50.880145   57945 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 11:54:50.880235   57945 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 11:54:50.880334   57945 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 11:54:50.880543   57945 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 11:54:50.880654   57945 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 11:54:50.880771   57945 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 11:54:50.880835   57945 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 11:54:50.880912   57945 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 11:54:51.326073   57945 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 11:54:51.537409   57945 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 11:54:51.721400   57945 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 11:54:51.884882   57945 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 11:54:51.904377   57945 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 11:54:51.906470   57945 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 11:54:51.906560   57945 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 11:54:52.065800   57945 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 11:54:52.067657   57945 out.go:204]   - Booting up control plane ...
	I0610 11:54:52.067807   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 11:54:52.069012   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 11:54:52.070508   57945 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 11:54:52.071669   57945 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 11:54:52.074772   57945 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 11:54:53.633176   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:54:56.705245   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:02.785227   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:05.857320   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:11.941172   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:15.009275   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:21.089235   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:24.161264   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:32.077145   57945 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0610 11:55:32.077542   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:55:32.077740   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:55:30.241187   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:33.313200   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:37.078114   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:55:37.078357   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:55:39.393317   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:42.465223   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:47.078706   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:55:47.078906   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:55:48.545281   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:51.617229   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:55:57.697600   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:00.769294   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:07.079053   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:56:07.079285   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:56:06.849261   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:09.925249   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:16.001299   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:19.077309   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:25.153200   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:28.225172   60146 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0610 11:56:31.226848   60146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 11:56:31.226888   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetMachineName
	I0610 11:56:31.227225   60146 buildroot.go:166] provisioning hostname "default-k8s-diff-port-281114"
	I0610 11:56:31.227250   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetMachineName
	I0610 11:56:31.227458   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:31.229187   60146 machine.go:97] duration metric: took 4m37.416418256s to provisionDockerMachine
	I0610 11:56:31.229224   60146 fix.go:56] duration metric: took 4m37.441343871s for fixHost
	I0610 11:56:31.229230   60146 start.go:83] releasing machines lock for "default-k8s-diff-port-281114", held for 4m37.44136358s
	W0610 11:56:31.229245   60146 start.go:713] error starting host: provision: host is not running
	W0610 11:56:31.229314   60146 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0610 11:56:31.229325   60146 start.go:728] Will try again in 5 seconds ...
	I0610 11:56:36.230954   60146 start.go:360] acquireMachinesLock for default-k8s-diff-port-281114: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 11:56:36.231068   60146 start.go:364] duration metric: took 60.465µs to acquireMachinesLock for "default-k8s-diff-port-281114"
	I0610 11:56:36.231091   60146 start.go:96] Skipping create...Using existing machine configuration
	I0610 11:56:36.231096   60146 fix.go:54] fixHost starting: 
	I0610 11:56:36.231372   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:56:36.231392   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:56:36.247286   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38849
	I0610 11:56:36.247715   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:56:36.248272   60146 main.go:141] libmachine: Using API Version  1
	I0610 11:56:36.248292   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:56:36.248585   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:56:36.248787   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:36.248939   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 11:56:36.250776   60146 fix.go:112] recreateIfNeeded on default-k8s-diff-port-281114: state=Stopped err=<nil>
	I0610 11:56:36.250796   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	W0610 11:56:36.250950   60146 fix.go:138] unexpected machine state, will restart: <nil>
	I0610 11:56:36.252942   60146 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-281114" ...
	I0610 11:56:36.254300   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Start
	I0610 11:56:36.254515   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Ensuring networks are active...
	I0610 11:56:36.255281   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Ensuring network default is active
	I0610 11:56:36.255626   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Ensuring network mk-default-k8s-diff-port-281114 is active
	I0610 11:56:36.256059   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Getting domain xml...
	I0610 11:56:36.256819   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Creating domain...
	I0610 11:56:37.521102   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting to get IP...
	I0610 11:56:37.522061   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:37.522494   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:37.522553   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:37.522473   61276 retry.go:31] will retry after 220.098219ms: waiting for machine to come up
	I0610 11:56:37.743932   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:37.744482   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:37.744513   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:37.744440   61276 retry.go:31] will retry after 292.471184ms: waiting for machine to come up
	I0610 11:56:38.038937   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:38.039497   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:38.039526   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:38.039454   61276 retry.go:31] will retry after 446.869846ms: waiting for machine to come up
	I0610 11:56:38.488091   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:38.488684   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:38.488708   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:38.488635   61276 retry.go:31] will retry after 607.787706ms: waiting for machine to come up
	I0610 11:56:39.098375   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:39.098845   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:39.098875   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:39.098795   61276 retry.go:31] will retry after 610.636143ms: waiting for machine to come up
	I0610 11:56:39.710692   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:39.711170   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:39.711198   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:39.711106   61276 retry.go:31] will retry after 598.132053ms: waiting for machine to come up
	I0610 11:56:40.310889   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:40.311397   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:40.311420   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:40.311328   61276 retry.go:31] will retry after 1.191704846s: waiting for machine to come up
	I0610 11:56:41.505131   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:41.505601   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:41.505631   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:41.505572   61276 retry.go:31] will retry after 937.081207ms: waiting for machine to come up
	I0610 11:56:42.444793   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:42.445368   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:42.445396   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:42.445338   61276 retry.go:31] will retry after 1.721662133s: waiting for machine to come up
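	[editor's note] The retry.go lines above show libmachine polling for the restarted default-k8s-diff-port-281114 VM's IP with a growing delay between attempts (220ms, 292ms, 446ms, ...). A generic retry helper in that spirit is sketched below; the 1.5x growth factor, jitter, and 2-second cap are assumptions, and this is not the actual retry.go implementation.

	// retry_sketch.go -- illustrative retry-with-increasing-delay helper in the
	// spirit of the "will retry after ...: waiting for machine to come up" lines above.
	package main

	import (
		"errors"
		"fmt"
		"math"
		"math/rand"
		"time"
	)

	// retry calls fn up to maxAttempts times, sleeping an increasing, jittered
	// delay between attempts. Growth factor, jitter, and cap are illustrative.
	func retry(maxAttempts int, base time.Duration, fn func() error) error {
		var err error
		for attempt := 0; attempt < maxAttempts; attempt++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := time.Duration(float64(base) * math.Pow(1.5, float64(attempt)))
			delay += time.Duration(rand.Int63n(int64(base))) // add jitter
			if delay > 2*time.Second {
				delay = 2 * time.Second
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		attempts := 0
		err := retry(8, 200*time.Millisecond, func() error {
			attempts++
			if attempts < 5 {
				return errors.New("waiting for machine to come up")
			}
			return nil
		})
		fmt.Println("done:", err)
	}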
	I0610 11:56:47.078993   57945 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0610 11:56:47.079439   57945 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0610 11:56:47.079463   57945 kubeadm.go:309] 
	I0610 11:56:47.079513   57945 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0610 11:56:47.079597   57945 kubeadm.go:309] 		timed out waiting for the condition
	I0610 11:56:47.079629   57945 kubeadm.go:309] 
	I0610 11:56:47.079678   57945 kubeadm.go:309] 	This error is likely caused by:
	I0610 11:56:47.079718   57945 kubeadm.go:309] 		- The kubelet is not running
	I0610 11:56:47.079865   57945 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0610 11:56:47.079876   57945 kubeadm.go:309] 
	I0610 11:56:47.080014   57945 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0610 11:56:47.080077   57945 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0610 11:56:47.080132   57945 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0610 11:56:47.080151   57945 kubeadm.go:309] 
	I0610 11:56:47.080280   57945 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0610 11:56:47.080377   57945 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0610 11:56:47.080389   57945 kubeadm.go:309] 
	I0610 11:56:47.080543   57945 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0610 11:56:47.080663   57945 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0610 11:56:47.080769   57945 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0610 11:56:47.080862   57945 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0610 11:56:47.080874   57945 kubeadm.go:309] 
	I0610 11:56:47.081877   57945 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 11:56:47.082023   57945 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0610 11:56:47.082137   57945 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0610 11:56:47.082233   57945 kubeadm.go:393] duration metric: took 8m2.423366884s to StartCluster
	I0610 11:56:47.082273   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0610 11:56:47.082325   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0610 11:56:47.130548   57945 cri.go:89] found id: ""
	I0610 11:56:47.130585   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.130596   57945 logs.go:278] No container was found matching "kube-apiserver"
	I0610 11:56:47.130603   57945 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0610 11:56:47.130673   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0610 11:56:47.170087   57945 cri.go:89] found id: ""
	I0610 11:56:47.170124   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.170136   57945 logs.go:278] No container was found matching "etcd"
	I0610 11:56:47.170144   57945 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0610 11:56:47.170219   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0610 11:56:47.210394   57945 cri.go:89] found id: ""
	I0610 11:56:47.210430   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.210442   57945 logs.go:278] No container was found matching "coredns"
	I0610 11:56:47.210450   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0610 11:56:47.210532   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0610 11:56:47.246002   57945 cri.go:89] found id: ""
	I0610 11:56:47.246032   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.246043   57945 logs.go:278] No container was found matching "kube-scheduler"
	I0610 11:56:47.246051   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0610 11:56:47.246119   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0610 11:56:47.282333   57945 cri.go:89] found id: ""
	I0610 11:56:47.282361   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.282369   57945 logs.go:278] No container was found matching "kube-proxy"
	I0610 11:56:47.282375   57945 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0610 11:56:47.282432   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0610 11:56:47.316205   57945 cri.go:89] found id: ""
	I0610 11:56:47.316241   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.316254   57945 logs.go:278] No container was found matching "kube-controller-manager"
	I0610 11:56:47.316262   57945 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0610 11:56:47.316323   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0610 11:56:47.356012   57945 cri.go:89] found id: ""
	I0610 11:56:47.356047   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.356060   57945 logs.go:278] No container was found matching "kindnet"
	I0610 11:56:47.356069   57945 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0610 11:56:47.356140   57945 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0610 11:56:47.404624   57945 cri.go:89] found id: ""
	I0610 11:56:47.404655   57945 logs.go:276] 0 containers: []
	W0610 11:56:47.404666   57945 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0610 11:56:47.404678   57945 logs.go:123] Gathering logs for kubelet ...
	I0610 11:56:47.404694   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0610 11:56:47.475236   57945 logs.go:123] Gathering logs for dmesg ...
	I0610 11:56:47.475282   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0610 11:56:47.493382   57945 logs.go:123] Gathering logs for describe nodes ...
	I0610 11:56:47.493418   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0610 11:56:47.589894   57945 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0610 11:56:47.589918   57945 logs.go:123] Gathering logs for CRI-O ...
	I0610 11:56:47.589934   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0610 11:56:47.726080   57945 logs.go:123] Gathering logs for container status ...
	I0610 11:56:47.726123   57945 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
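	[editor's note] After the kubeadm timeout, minikube falls back to collecting diagnostics (kubelet and CRI-O journals, dmesg, describe nodes, container status), as the "Gathering logs for ..." lines above show; it runs each command on the node over SSH via ssh_runner. The sketch below replays the same command set locally via os/exec; the commands are copied from the log, but running them directly rather than over SSH is an assumption for illustration.

	// gatherlogs_sketch.go -- illustrative local version of the log-gathering
	// commands quoted above; minikube itself runs these on the node over SSH.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Commands taken from the "Gathering logs for ..." lines in the report.
		cmds := []string{
			"sudo journalctl -u kubelet -n 400",
			"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"sudo journalctl -u crio -n 400",
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for _, c := range cmds {
			fmt.Printf("==> %s\n", c)
			out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
			if err != nil {
				// Keep going: some collectors are expected to fail when the
				// control plane never started (as in the report above).
				fmt.Printf("command failed: %v\n", err)
			}
			fmt.Println(string(out))
		}
	}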
	W0610 11:56:47.770399   57945 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0610 11:56:47.770451   57945 out.go:239] * 
	W0610 11:56:47.770532   57945 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0610 11:56:47.770558   57945 out.go:239] * 
	W0610 11:56:47.771459   57945 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 11:56:47.775172   57945 out.go:177] 
	W0610 11:56:47.776444   57945 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0610 11:56:47.776509   57945 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0610 11:56:47.776545   57945 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0610 11:56:47.778306   57945 out.go:177] 
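	The failure above is minikube's K8S_KUBELET_NOT_RUNNING exit path: kubeadm's wait-control-plane phase gives up because the kubelet never answers on 127.0.0.1:10248. A minimal triage sequence on the node, assembled from the suggestions printed in the log itself (the "<profile>" placeholder is an assumption, not taken from this run):

	    # Is the kubelet running, and why did it stop?
	    systemctl status kubelet
	    journalctl -xeu kubelet

	    # Did a control-plane container crash under cri-o?
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	    # If the kubelet logs point at a cgroup-driver mismatch, retry as the log suggests
	    # ("<profile>" stands for the affected minikube profile)
	    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd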
	I0610 11:56:44.168288   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:44.168809   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:44.168832   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:44.168762   61276 retry.go:31] will retry after 2.181806835s: waiting for machine to come up
	I0610 11:56:46.352210   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:46.352736   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:46.352764   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:46.352688   61276 retry.go:31] will retry after 2.388171324s: waiting for machine to come up
	I0610 11:56:48.744345   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:48.744853   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:48.744890   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:48.744815   61276 retry.go:31] will retry after 2.54250043s: waiting for machine to come up
	I0610 11:56:51.288816   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:51.289222   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | unable to find current IP address of domain default-k8s-diff-port-281114 in network mk-default-k8s-diff-port-281114
	I0610 11:56:51.289252   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | I0610 11:56:51.289190   61276 retry.go:31] will retry after 4.525493142s: waiting for machine to come up
	I0610 11:56:55.819862   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.820393   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Found IP for machine: 192.168.50.222
	I0610 11:56:55.820416   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Reserving static IP address...
	I0610 11:56:55.820433   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has current primary IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.820941   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-281114", mac: "52:54:00:23:06:35", ip: "192.168.50.222"} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:55.820984   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Reserved static IP address: 192.168.50.222
	I0610 11:56:55.821000   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | skip adding static IP to network mk-default-k8s-diff-port-281114 - found existing host DHCP lease matching {name: "default-k8s-diff-port-281114", mac: "52:54:00:23:06:35", ip: "192.168.50.222"}
	I0610 11:56:55.821012   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Getting to WaitForSSH function...
	I0610 11:56:55.821028   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Waiting for SSH to be available...
	I0610 11:56:55.823149   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.823499   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:55.823530   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.823680   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Using SSH client type: external
	I0610 11:56:55.823717   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Using SSH private key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa (-rw-------)
	I0610 11:56:55.823750   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.222 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0610 11:56:55.823764   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | About to run SSH command:
	I0610 11:56:55.823778   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | exit 0
	I0610 11:56:55.949264   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | SSH cmd err, output: <nil>: 
	I0610 11:56:55.949623   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetConfigRaw
	I0610 11:56:55.950371   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetIP
	I0610 11:56:55.953239   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.953602   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:55.953746   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.953874   60146 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/config.json ...
	I0610 11:56:55.954172   60146 machine.go:94] provisionDockerMachine start ...
	I0610 11:56:55.954203   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:55.954415   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:55.956837   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.957344   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:55.957361   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:55.957521   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:55.957710   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:55.957887   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:55.958055   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:55.958211   60146 main.go:141] libmachine: Using SSH client type: native
	I0610 11:56:55.958425   60146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.222 22 <nil> <nil>}
	I0610 11:56:55.958445   60146 main.go:141] libmachine: About to run SSH command:
	hostname
	I0610 11:56:56.061295   60146 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0610 11:56:56.061331   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetMachineName
	I0610 11:56:56.061559   60146 buildroot.go:166] provisioning hostname "default-k8s-diff-port-281114"
	I0610 11:56:56.061588   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetMachineName
	I0610 11:56:56.061787   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.064578   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.064938   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.064975   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.065131   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:56.065383   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.065565   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.065681   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:56.065874   60146 main.go:141] libmachine: Using SSH client type: native
	I0610 11:56:56.066079   60146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.222 22 <nil> <nil>}
	I0610 11:56:56.066094   60146 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-281114 && echo "default-k8s-diff-port-281114" | sudo tee /etc/hostname
	I0610 11:56:56.183602   60146 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-281114
	
	I0610 11:56:56.183626   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.186613   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.186986   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.187016   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.187260   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:56.187472   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.187656   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.187812   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:56.187993   60146 main.go:141] libmachine: Using SSH client type: native
	I0610 11:56:56.188192   60146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.222 22 <nil> <nil>}
	I0610 11:56:56.188220   60146 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-281114' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-281114/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-281114' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 11:56:56.298027   60146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 11:56:56.298057   60146 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19046-3880/.minikube CaCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19046-3880/.minikube}
	I0610 11:56:56.298076   60146 buildroot.go:174] setting up certificates
	I0610 11:56:56.298083   60146 provision.go:84] configureAuth start
	I0610 11:56:56.298094   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetMachineName
	I0610 11:56:56.298385   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetIP
	I0610 11:56:56.301219   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.301584   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.301614   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.301816   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.304010   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.304412   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.304438   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.304593   60146 provision.go:143] copyHostCerts
	I0610 11:56:56.304668   60146 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem, removing ...
	I0610 11:56:56.304681   60146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 11:56:56.304765   60146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem (1082 bytes)
	I0610 11:56:56.304874   60146 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem, removing ...
	I0610 11:56:56.304884   60146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 11:56:56.304924   60146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem (1123 bytes)
	I0610 11:56:56.305040   60146 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem, removing ...
	I0610 11:56:56.305050   60146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 11:56:56.305084   60146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem (1675 bytes)
	I0610 11:56:56.305153   60146 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-281114 san=[127.0.0.1 192.168.50.222 default-k8s-diff-port-281114 localhost minikube]
	I0610 11:56:56.411016   60146 provision.go:177] copyRemoteCerts
	I0610 11:56:56.411072   60146 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 11:56:56.411093   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.413736   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.414075   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.414122   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.414292   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:56.414498   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.414686   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:56.414785   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 11:56:56.495039   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 11:56:56.519750   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 11:56:56.543202   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0610 11:56:56.566420   60146 provision.go:87] duration metric: took 268.326859ms to configureAuth
	I0610 11:56:56.566449   60146 buildroot.go:189] setting minikube options for container-runtime
	I0610 11:56:56.566653   60146 config.go:182] Loaded profile config "default-k8s-diff-port-281114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:56:56.566732   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.569742   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.570135   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.570159   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.570411   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:56.570635   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.570815   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.570969   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:56.571169   60146 main.go:141] libmachine: Using SSH client type: native
	I0610 11:56:56.571334   60146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.222 22 <nil> <nil>}
	I0610 11:56:56.571350   60146 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 11:56:56.846705   60146 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 11:56:56.846727   60146 machine.go:97] duration metric: took 892.536744ms to provisionDockerMachine
	I0610 11:56:56.846741   60146 start.go:293] postStartSetup for "default-k8s-diff-port-281114" (driver="kvm2")
	I0610 11:56:56.846753   60146 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 11:56:56.846795   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:56.847123   60146 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 11:56:56.847150   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.849968   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.850300   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.850322   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.850518   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:56.850706   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.850889   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:56.851010   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 11:56:56.935027   60146 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 11:56:56.939465   60146 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 11:56:56.939489   60146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/addons for local assets ...
	I0610 11:56:56.939558   60146 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/files for local assets ...
	I0610 11:56:56.939641   60146 filesync.go:149] local asset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> 107582.pem in /etc/ssl/certs
	I0610 11:56:56.939728   60146 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 11:56:56.948993   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /etc/ssl/certs/107582.pem (1708 bytes)
	I0610 11:56:56.974611   60146 start.go:296] duration metric: took 127.85527ms for postStartSetup
	I0610 11:56:56.974655   60146 fix.go:56] duration metric: took 20.74355824s for fixHost
	I0610 11:56:56.974673   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:56.978036   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.978438   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:56.978471   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:56.978612   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:56.978804   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.978984   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:56.979157   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:56.979343   60146 main.go:141] libmachine: Using SSH client type: native
	I0610 11:56:56.979506   60146 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.50.222 22 <nil> <nil>}
	I0610 11:56:56.979524   60146 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 11:56:57.081416   60146 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718020617.058533839
	
	I0610 11:56:57.081444   60146 fix.go:216] guest clock: 1718020617.058533839
	I0610 11:56:57.081454   60146 fix.go:229] Guest: 2024-06-10 11:56:57.058533839 +0000 UTC Remote: 2024-06-10 11:56:56.974658577 +0000 UTC m=+303.333936196 (delta=83.875262ms)
	I0610 11:56:57.081476   60146 fix.go:200] guest clock delta is within tolerance: 83.875262ms
	I0610 11:56:57.081482   60146 start.go:83] releasing machines lock for "default-k8s-diff-port-281114", held for 20.850403793s
	I0610 11:56:57.081499   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:57.081775   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetIP
	I0610 11:56:57.084904   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:57.085408   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:57.085442   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:57.085619   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:57.086222   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:57.086432   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 11:56:57.086519   60146 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 11:56:57.086571   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:57.086660   60146 ssh_runner.go:195] Run: cat /version.json
	I0610 11:56:57.086694   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 11:56:57.089544   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:57.089869   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:57.089904   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:57.089931   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:57.090091   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:57.090259   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:57.090362   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:57.090388   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:57.090444   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:57.090539   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 11:56:57.090613   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 11:56:57.090667   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 11:56:57.090806   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 11:56:57.090969   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 11:56:57.215361   60146 ssh_runner.go:195] Run: systemctl --version
	I0610 11:56:57.221479   60146 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 11:56:57.363318   60146 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 11:56:57.369389   60146 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 11:56:57.369465   60146 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 11:56:57.385195   60146 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 11:56:57.385217   60146 start.go:494] detecting cgroup driver to use...
	I0610 11:56:57.385284   60146 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 11:56:57.404923   60146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 11:56:57.420158   60146 docker.go:217] disabling cri-docker service (if available) ...
	I0610 11:56:57.420204   60146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 11:56:57.434385   60146 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 11:56:57.448340   60146 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 11:56:57.574978   60146 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 11:56:57.714523   60146 docker.go:233] disabling docker service ...
	I0610 11:56:57.714620   60146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 11:56:57.729914   60146 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 11:56:57.742557   60146 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 11:56:57.885770   60146 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 11:56:58.018120   60146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0610 11:56:58.031606   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 11:56:58.049312   60146 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0610 11:56:58.049389   60146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:56:58.059800   60146 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 11:56:58.059877   60146 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:56:58.071774   60146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:56:58.082332   60146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:56:58.093474   60146 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 11:56:58.104231   60146 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:56:58.114328   60146 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:56:58.131812   60146 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 11:56:58.142612   60146 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 11:56:58.152681   60146 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0610 11:56:58.152750   60146 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0610 11:56:58.166120   60146 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 11:56:58.176281   60146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:56:58.306558   60146 ssh_runner.go:195] Run: sudo systemctl restart crio
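	The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf before cri-o is restarted. A quick way to confirm the result on the node; the expected values are inferred from the sed commands in this log, not read back from the machine:

	    # Show the fields minikube just rewrote
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    # Expected (inferred from the commands above):
	    #   pause_image = "registry.k8s.io/pause:3.9"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",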
	I0610 11:56:58.446379   60146 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 11:56:58.446460   60146 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 11:56:58.452523   60146 start.go:562] Will wait 60s for crictl version
	I0610 11:56:58.452619   60146 ssh_runner.go:195] Run: which crictl
	I0610 11:56:58.456611   60146 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 11:56:58.503496   60146 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0610 11:56:58.503581   60146 ssh_runner.go:195] Run: crio --version
	I0610 11:56:58.532834   60146 ssh_runner.go:195] Run: crio --version
	I0610 11:56:58.562697   60146 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0610 11:56:58.563974   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetIP
	I0610 11:56:58.566760   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:58.567107   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 11:56:58.567142   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 11:56:58.567408   60146 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0610 11:56:58.571671   60146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 11:56:58.584423   60146 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-281114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-281114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.222 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 11:56:58.584535   60146 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 11:56:58.584588   60146 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 11:56:58.622788   60146 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0610 11:56:58.622862   60146 ssh_runner.go:195] Run: which lz4
	I0610 11:56:58.627561   60146 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0610 11:56:58.632560   60146 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 11:56:58.632595   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0610 11:56:59.943375   60146 crio.go:462] duration metric: took 1.315853744s to copy over tarball
	I0610 11:56:59.943444   60146 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 11:57:02.167265   60146 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.223791523s)
	I0610 11:57:02.167299   60146 crio.go:469] duration metric: took 2.223894548s to extract the tarball
	I0610 11:57:02.167308   60146 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 11:57:02.206288   60146 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 11:57:02.250013   60146 crio.go:514] all images are preloaded for cri-o runtime.
	I0610 11:57:02.250034   60146 cache_images.go:84] Images are preloaded, skipping loading
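The preload check above asks crictl for the image list and falls back to copying and extracting preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 only when registry.k8s.io/kube-apiserver:v1.30.1 is missing. A minimal sketch of that check, assuming crictl is on PATH; the JSON field names follow crictl's documented output, everything else is illustrative:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    // crictlImages mirrors the relevant part of `crictl images --output json`.
    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasImage reports whether any image tag matches want, e.g.
    // "registry.k8s.io/kube-apiserver:v1.30.1".
    func hasImage(want string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var imgs crictlImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            return false, err
        }
        for _, img := range imgs.Images {
            for _, tag := range img.RepoTags {
                if strings.EqualFold(tag, want) {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.30.1")
        if err != nil {
            fmt.Println("crictl query failed:", err)
            return
        }
        if ok {
            fmt.Println("images are preloaded, skipping tarball copy")
        } else {
            fmt.Println("preload missing, would scp and extract preloaded.tar.lz4")
        }
    }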
	I0610 11:57:02.250041   60146 kubeadm.go:928] updating node { 192.168.50.222 8444 v1.30.1 crio true true} ...
	I0610 11:57:02.250163   60146 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-281114 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.222
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:default-k8s-diff-port-281114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0610 11:57:02.250261   60146 ssh_runner.go:195] Run: crio config
	I0610 11:57:02.305797   60146 cni.go:84] Creating CNI manager for ""
	I0610 11:57:02.305822   60146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:57:02.305838   60146 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0610 11:57:02.305867   60146 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.222 APIServerPort:8444 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-281114 NodeName:default-k8s-diff-port-281114 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.222"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.222 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 11:57:02.306030   60146 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.222
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-281114"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.222
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.222"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 11:57:02.306111   60146 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0610 11:57:02.316522   60146 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 11:57:02.316585   60146 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 11:57:02.326138   60146 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0610 11:57:02.342685   60146 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 11:57:02.359693   60146 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
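The kubelet drop-in and kubeadm.yaml shown above are rendered from templates filled with the node's parameters (Kubernetes version, hostname override, node IP) and then copied onto the guest. A simplified sketch of that rendering step; the template here is a cut-down stand-in, not minikube's actual one:

    package main

    import (
        "os"
        "text/template"
    )

    // nodeParams holds the values substituted into the kubelet drop-in;
    // the field names are illustrative, not minikube's own types.
    type nodeParams struct {
        KubernetesVersion string
        NodeName          string
        NodeIP            string
    }

    const kubeletDropIn = `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet \
      --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
      --config=/var/lib/kubelet/config.yaml \
      --hostname-override={{.NodeName}} \
      --kubeconfig=/etc/kubernetes/kubelet.conf \
      --node-ip={{.NodeIP}}
    `

    func main() {
        p := nodeParams{
            KubernetesVersion: "v1.30.1",
            NodeName:          "default-k8s-diff-port-281114",
            NodeIP:            "192.168.50.222",
        }
        // Render the drop-in; in the log this content ends up at
        // /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the guest.
        tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
        if err := tmpl.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }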
	I0610 11:57:02.375771   60146 ssh_runner.go:195] Run: grep 192.168.50.222	control-plane.minikube.internal$ /etc/hosts
	I0610 11:57:02.379280   60146 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.222	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
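The /etc/hosts update strips any stale control-plane.minikube.internal line and appends the current IP before copying the file back with sudo. A pure-Go sketch of the same transformation (minikube performs it with the bash one-liner shown above, run over SSH):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry returns hosts-file content with exactly one line mapping
    // host to ip, dropping any stale entry first (same effect as the bash
    // one-liner in the log).
    func ensureHostsEntry(hosts, ip, host string) string {
        var kept []string
        for _, line := range strings.Split(hosts, "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop the old mapping, whatever IP it pointed at
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Println("read /etc/hosts:", err)
            return
        }
        updated := ensureHostsEntry(strings.TrimRight(string(data), "\n"),
            "192.168.50.222", "control-plane.minikube.internal")
        // Writing back needs root; minikube copies the result via `sudo cp`.
        fmt.Print(updated)
    }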
	I0610 11:57:02.390797   60146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 11:57:02.511286   60146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 11:57:02.529051   60146 certs.go:68] Setting up /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114 for IP: 192.168.50.222
	I0610 11:57:02.529076   60146 certs.go:194] generating shared ca certs ...
	I0610 11:57:02.529095   60146 certs.go:226] acquiring lock for ca certs: {Name:mke8b68fecbd8b649419d142cdde25446085a9ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 11:57:02.529281   60146 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key
	I0610 11:57:02.529358   60146 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key
	I0610 11:57:02.529373   60146 certs.go:256] generating profile certs ...
	I0610 11:57:02.529492   60146 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/client.key
	I0610 11:57:02.529576   60146 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/apiserver.key.d35a2a33
	I0610 11:57:02.529626   60146 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/proxy-client.key
	I0610 11:57:02.529769   60146 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem (1338 bytes)
	W0610 11:57:02.529810   60146 certs.go:480] ignoring /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758_empty.pem, impossibly tiny 0 bytes
	I0610 11:57:02.529823   60146 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem (1679 bytes)
	I0610 11:57:02.529857   60146 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem (1082 bytes)
	I0610 11:57:02.529893   60146 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem (1123 bytes)
	I0610 11:57:02.529924   60146 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem (1675 bytes)
	I0610 11:57:02.529981   60146 certs.go:484] found cert: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem (1708 bytes)
	I0610 11:57:02.531166   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 11:57:02.570183   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0610 11:57:02.607339   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 11:57:02.653464   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0610 11:57:02.694329   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0610 11:57:02.722420   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 11:57:02.747321   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 11:57:02.772755   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/default-k8s-diff-port-281114/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 11:57:02.797241   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 11:57:02.821892   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/10758.pem --> /usr/share/ca-certificates/10758.pem (1338 bytes)
	I0610 11:57:02.846925   60146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /usr/share/ca-certificates/107582.pem (1708 bytes)
	I0610 11:57:02.870986   60146 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 11:57:02.889088   60146 ssh_runner.go:195] Run: openssl version
	I0610 11:57:02.894820   60146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10758.pem && ln -fs /usr/share/ca-certificates/10758.pem /etc/ssl/certs/10758.pem"
	I0610 11:57:02.906689   60146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10758.pem
	I0610 11:57:02.911048   60146 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jun 10 10:34 /usr/share/ca-certificates/10758.pem
	I0610 11:57:02.911095   60146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10758.pem
	I0610 11:57:02.916866   60146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10758.pem /etc/ssl/certs/51391683.0"
	I0610 11:57:02.928405   60146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107582.pem && ln -fs /usr/share/ca-certificates/107582.pem /etc/ssl/certs/107582.pem"
	I0610 11:57:02.941254   60146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107582.pem
	I0610 11:57:02.945849   60146 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jun 10 10:34 /usr/share/ca-certificates/107582.pem
	I0610 11:57:02.945899   60146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107582.pem
	I0610 11:57:02.951833   60146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/107582.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 11:57:02.963661   60146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 11:57:02.975117   60146 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:57:02.979667   60146 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 10 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:57:02.979731   60146 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 11:57:02.985212   60146 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
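Each openssl `-hash` / `ln -fs` pair above installs a CA into /etc/ssl/certs under OpenSSL's subject-hash naming scheme (b5213941.0 for minikubeCA here). A sketch of computing the hash and creating the link, assuming openssl is installed; run unprivileged it simply reports the link it would have made:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // subjectHash asks openssl for the subject hash that /etc/ssl/certs uses as
    // the symlink name (e.g. "b5213941" for the minikube CA in the log).
    func subjectHash(certPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        hash, err := subjectHash(cert)
        if err != nil {
            fmt.Println("openssl failed:", err)
            return
        }
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // minikube runs the equivalent `ln -fs` with sudo; a plain os.Symlink
        // fails without root, which is fine for a sketch.
        if err := os.Symlink(cert, link); err != nil {
            fmt.Printf("would link %s -> %s (%v)\n", link, cert, err)
        }
    }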
	I0610 11:57:02.997007   60146 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0610 11:57:03.001498   60146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0610 11:57:03.007549   60146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0610 11:57:03.013717   60146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0610 11:57:03.019947   60146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0610 11:57:03.025890   60146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0610 11:57:03.031443   60146 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
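Each `-checkend 86400` call above exits non-zero if the certificate expires within the next 24 hours, which is how minikube decides whether control-plane certs need regenerating. The same check can be done natively, as in this sketch (paths taken from the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the same decision `openssl x509 -checkend 86400` makes for d = 24h.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        for _, p := range []string{
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
        } {
            soon, err := expiresWithin(p, 24*time.Hour)
            fmt.Println(p, "expires within 24h:", soon, "err:", err)
        }
    }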
	I0610 11:57:03.036936   60146 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-281114 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.1 ClusterName:default-k8s-diff-port-281114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.222 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 11:57:03.037056   60146 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0610 11:57:03.037111   60146 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 11:57:03.088497   60146 cri.go:89] found id: ""
	I0610 11:57:03.088555   60146 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0610 11:57:03.099358   60146 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0610 11:57:03.099381   60146 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0610 11:57:03.099386   60146 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0610 11:57:03.099436   60146 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0610 11:57:03.109092   60146 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0610 11:57:03.110113   60146 kubeconfig.go:125] found "default-k8s-diff-port-281114" server: "https://192.168.50.222:8444"
	I0610 11:57:03.112565   60146 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0610 11:57:03.122338   60146 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.222
	I0610 11:57:03.122370   60146 kubeadm.go:1154] stopping kube-system containers ...
	I0610 11:57:03.122392   60146 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0610 11:57:03.122453   60146 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0610 11:57:03.159369   60146 cri.go:89] found id: ""
	I0610 11:57:03.159470   60146 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0610 11:57:03.176704   60146 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 11:57:03.186957   60146 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 11:57:03.186977   60146 kubeadm.go:156] found existing configuration files:
	
	I0610 11:57:03.187040   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0610 11:57:03.196318   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 11:57:03.196397   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 11:57:03.205630   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0610 11:57:03.214480   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 11:57:03.214538   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 11:57:03.223939   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0610 11:57:03.232372   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 11:57:03.232422   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 11:57:03.241846   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0610 11:57:03.251014   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 11:57:03.251092   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 11:57:03.260115   60146 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
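The config check above finds none of the /etc/kubernetes/*.conf files, so each grep-and-remove pass is effectively a no-op; on a populated node the same loop drops any kubeconfig that does not point at https://control-plane.minikube.internal:8444 so that kubeadm regenerates it in the next phase. A minimal sketch of that decision (paths from the log; minikube runs the equivalent grep/rm over SSH with sudo):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8444"
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, conf := range confs {
            data, err := os.ReadFile(conf)
            if err == nil && strings.Contains(string(data), endpoint) {
                continue // already points at the right endpoint, keep it
            }
            // Missing or pointing elsewhere: remove it so `kubeadm init phase
            // kubeconfig` rewrites it.
            if err := os.Remove(conf); err != nil && !os.IsNotExist(err) {
                fmt.Println("remove", conf, ":", err)
            }
        }
    }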
	I0610 11:57:03.269792   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:57:03.388582   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:57:04.274314   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:57:04.473968   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:57:04.531884   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
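Rather than a full `kubeadm init`, the restart path replays individual init phases against the freshly written /var/tmp/minikube/kubeadm.yaml, in the order shown above (certs, kubeconfig, kubelet-start, control-plane, etcd). A sketch of driving those phases, assuming the kubeadm binary path used in the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.30.1/kubeadm"
        cfg := "/var/tmp/minikube/kubeadm.yaml"
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, phase := range phases {
            args := append(phase, "--config", cfg)
            cmd := exec.Command(kubeadm, args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            fmt.Println("running:", kubeadm, args)
            if err := cmd.Run(); err != nil {
                fmt.Println("phase failed:", err)
                return
            }
        }
    }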
	I0610 11:57:04.618371   60146 api_server.go:52] waiting for apiserver process to appear ...
	I0610 11:57:04.618464   60146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:57:05.118733   60146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:57:05.619107   60146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:57:06.118937   60146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:57:06.138176   60146 api_server.go:72] duration metric: took 1.519803379s to wait for apiserver process to appear ...
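The process wait above polls `pgrep -xnf kube-apiserver.*minikube.*` roughly every half second until a PID appears (about 1.5s in this run). A sketch of that polling loop; the two-minute timeout is an assumption for the sketch, not a value taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForAPIServerProcess polls pgrep until a kube-apiserver process shows
    // up or the deadline passes (the log shows ~500ms between attempts).
    func waitForAPIServerProcess(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                return strings.TrimSpace(string(out)), nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return "", fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        pid, err := waitForAPIServerProcess(2 * time.Minute)
        fmt.Println("pid:", pid, "err:", err)
    }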
	I0610 11:57:06.138205   60146 api_server.go:88] waiting for apiserver healthz status ...
	I0610 11:57:06.138223   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:09.201655   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0610 11:57:09.201680   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0610 11:57:09.201691   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:09.305898   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:09.305934   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:09.639319   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:09.644006   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:09.644041   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:10.138712   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:10.144989   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:10.145024   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:10.638505   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:10.642825   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:10.642861   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:11.138360   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:11.143062   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:11.143087   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:11.639058   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:11.643394   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:11.643419   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:12.139125   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:12.143425   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0610 11:57:12.143452   60146 api_server.go:103] status: https://192.168.50.222:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0610 11:57:12.639074   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 11:57:12.644121   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 200:
	ok
	I0610 11:57:12.650538   60146 api_server.go:141] control plane version: v1.30.1
	I0610 11:57:12.650570   60146 api_server.go:131] duration metric: took 6.512357672s to wait for apiserver health ...
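The healthz loop above tolerates the early 403 (anonymous access is rejected before RBAC bootstrap finishes) and the subsequent 500s (post-start hooks still failing) and keeps polling until /healthz returns 200 "ok", about 6.5s in this run. A sketch of that probe; the InsecureSkipVerify shortcut and the 4-minute budget are assumptions for the sketch, while minikube itself verifies against the cluster CA generated earlier:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // InsecureSkipVerify is only for this sketch.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.50.222:8444/healthz"
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy:", string(body))
                    return
                }
                // 403 and 500 are expected while bootstrap hooks finish.
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver never became healthy")
    }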
	I0610 11:57:12.650581   60146 cni.go:84] Creating CNI manager for ""
	I0610 11:57:12.650590   60146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 11:57:12.652548   60146 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 11:57:12.653918   60146 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 11:57:12.664536   60146 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
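Configuring bridge CNI amounts to writing a conflist into /etc/cni/net.d that ties the bridge plugin to the 10.244.0.0/16 pod CIDR chosen earlier. The sketch below emits an illustrative conflist of that shape; it does not reproduce the exact fields of minikube's 1-k8s.conflist:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Illustrative bridge conflist; the real file differs in detail but
        // follows the same CNI spec structure.
        conflist := map[string]any{
            "cniVersion": "1.0.0",
            "name":       "bridge",
            "plugins": []map[string]any{
                {
                    "type":      "bridge",
                    "bridge":    "bridge",
                    "isGateway": true,
                    "ipMasq":    true,
                    "ipam": map[string]any{
                        "type":   "host-local",
                        "subnet": "10.244.0.0/16",
                    },
                },
                {"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
            },
        }
        out, _ := json.MarshalIndent(conflist, "", "  ")
        fmt.Println(string(out)) // would be written to /etc/cni/net.d/1-k8s.conflist
    }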
	I0610 11:57:12.685230   60146 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 11:57:12.694511   60146 system_pods.go:59] 8 kube-system pods found
	I0610 11:57:12.694546   60146 system_pods.go:61] "coredns-7db6d8ff4d-5ngxc" [26f3438c-a6a2-43d5-b79d-991752b4cc10] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0610 11:57:12.694561   60146 system_pods.go:61] "etcd-default-k8s-diff-port-281114" [e8a3dc04-a9f0-4670-8256-7a0a617958ba] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0610 11:57:12.694610   60146 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-281114" [45080cf7-94ee-4c55-a3b4-cfa8d3b4edbe] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0610 11:57:12.694626   60146 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-281114" [3f51cb0c-bb90-4847-acd4-0ed8a58608ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0610 11:57:12.694633   60146 system_pods.go:61] "kube-proxy-896ts" [13b994b7-8d0e-4e3d-9902-3bdd7a9ab949] Running
	I0610 11:57:12.694648   60146 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-281114" [c205a8b5-e970-40ed-83d7-462781bcf41f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0610 11:57:12.694659   60146 system_pods.go:61] "metrics-server-569cc877fc-jhv6f" [60a2e6ad-714a-4c6d-b586-232d130397a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 11:57:12.694665   60146 system_pods.go:61] "storage-provisioner" [b54a4493-2c6d-4a5e-b74c-ba9863979688] Running
	I0610 11:57:12.694675   60146 system_pods.go:74] duration metric: took 9.424371ms to wait for pod list to return data ...
	I0610 11:57:12.694687   60146 node_conditions.go:102] verifying NodePressure condition ...
	I0610 11:57:12.697547   60146 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 11:57:12.697571   60146 node_conditions.go:123] node cpu capacity is 2
	I0610 11:57:12.697583   60146 node_conditions.go:105] duration metric: took 2.887217ms to run NodePressure ...
	I0610 11:57:12.697633   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 11:57:12.966838   60146 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0610 11:57:12.971616   60146 kubeadm.go:733] kubelet initialised
	I0610 11:57:12.971641   60146 kubeadm.go:734] duration metric: took 4.781436ms waiting for restarted kubelet to initialise ...
	I0610 11:57:12.971649   60146 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 11:57:12.977162   60146 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-5ngxc" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:14.984174   60146 pod_ready.go:102] pod "coredns-7db6d8ff4d-5ngxc" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:16.984365   60146 pod_ready.go:102] pod "coredns-7db6d8ff4d-5ngxc" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:18.985423   60146 pod_ready.go:92] pod "coredns-7db6d8ff4d-5ngxc" in "kube-system" namespace has status "Ready":"True"
	I0610 11:57:18.985447   60146 pod_ready.go:81] duration metric: took 6.008259879s for pod "coredns-7db6d8ff4d-5ngxc" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:18.985459   60146 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:18.992228   60146 pod_ready.go:92] pod "etcd-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 11:57:18.992249   60146 pod_ready.go:81] duration metric: took 6.782049ms for pod "etcd-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:18.992261   60146 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:18.998328   60146 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 11:57:18.998354   60146 pod_ready.go:81] duration metric: took 6.080448ms for pod "kube-apiserver-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:18.998363   60146 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:21.004441   60146 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:23.005035   60146 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:23.505290   60146 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 11:57:23.505316   60146 pod_ready.go:81] duration metric: took 4.506946099s for pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:23.505326   60146 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-896ts" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:23.510714   60146 pod_ready.go:92] pod "kube-proxy-896ts" in "kube-system" namespace has status "Ready":"True"
	I0610 11:57:23.510733   60146 pod_ready.go:81] duration metric: took 5.402289ms for pod "kube-proxy-896ts" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:23.510741   60146 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:23.515120   60146 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 11:57:23.515138   60146 pod_ready.go:81] duration metric: took 4.391539ms for pod "kube-scheduler-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:23.515145   60146 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace to be "Ready" ...
	I0610 11:57:25.522456   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:28.021723   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:30.521428   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:32.521868   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:35.020800   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:37.021406   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:39.022230   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:41.026828   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:43.521675   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:46.021385   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:48.521085   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:50.521489   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:53.020867   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:55.021644   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:57.521383   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:57:59.521662   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:02.021864   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:04.521572   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:07.021580   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:09.521128   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:11.522117   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:14.021270   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:16.022304   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:18.521534   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:21.021061   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:23.021721   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:25.521779   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:28.021005   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:30.023892   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:32.521068   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:35.022247   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:37.022812   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:39.521194   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:41.521813   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:43.521847   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:46.021646   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:48.521791   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:51.020662   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:53.020752   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:55.021736   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:58:57.521819   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:00.021201   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:02.521497   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:05.021115   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:07.521673   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:10.022328   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:12.521244   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:15.020407   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:17.021142   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:19.021398   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:21.021949   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:23.022714   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:25.521324   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:27.523011   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:30.021380   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:32.021456   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:34.021713   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:36.523229   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:39.023269   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:41.521241   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:43.522882   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:46.021368   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:48.021781   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:50.022979   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:52.522357   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:55.022181   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 11:59:57.521630   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:00.022732   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:02.524425   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:05.021218   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:07.021736   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:09.521121   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:12.022455   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:14.023274   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:16.521626   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:19.021624   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:21.021728   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:23.022457   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:25.023406   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:27.523393   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:30.022146   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:32.520816   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:34.522050   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:36.522345   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:39.021544   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:41.022726   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:43.520941   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:45.521181   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:47.522257   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:49.522829   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:51.523346   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:54.020982   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:56.021367   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:00:58.021467   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:00.021643   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:02.021791   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:04.021864   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:06.021968   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:08.521556   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:10.521588   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:12.521870   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:15.025925   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:17.523018   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:20.022903   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:22.521723   60146 pod_ready.go:102] pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace has status "Ready":"False"
	I0610 12:01:23.515523   60146 pod_ready.go:81] duration metric: took 4m0.000361045s for pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace to be "Ready" ...
	E0610 12:01:23.515558   60146 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-jhv6f" in "kube-system" namespace to be "Ready" (will not retry!)
	I0610 12:01:23.515582   60146 pod_ready.go:38] duration metric: took 4m10.543923644s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
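Editor's note on the pod_ready.go lines above: the "Ready":"True"/"False" values are the pod's PodReady condition from its status, which minikube polls until the condition flips or the stated 4m0s budget runs out. The sketch below reproduces that check with client-go; it is an illustration, not minikube's actual implementation, and the kubeconfig path, pod name and polling interval are simply copied from this log. Roughly the same check can be done with `kubectl wait --for=condition=Ready pod/<name> -n kube-system --timeout=4m`.

// poll_pod_ready.go - a minimal sketch of the readiness check behind the
// pod_ready.go log lines: fetch the pod and inspect its PodReady condition.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path taken from this log; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19046-3880/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-569cc877fc-jhv6f", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2500 * time.Millisecond) // the log shows a roughly 2.5s poll cadence
	}
	fmt.Println("timed out waiting for pod to be Ready")
}

In this run the condition never became True for the metrics-server pod, which is exactly what produced the WaitExtra timeout logged above.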
	I0610 12:01:23.515614   60146 kubeadm.go:591] duration metric: took 4m20.4162222s to restartPrimaryControlPlane
	W0610 12:01:23.515715   60146 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0610 12:01:23.515751   60146 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0610 12:01:54.687867   60146 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.172093979s)
	I0610 12:01:54.687931   60146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 12:01:54.704702   60146 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 12:01:54.714940   60146 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 12:01:54.724675   60146 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 12:01:54.724702   60146 kubeadm.go:156] found existing configuration files:
	
	I0610 12:01:54.724749   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0610 12:01:54.734652   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0610 12:01:54.734726   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0610 12:01:54.744642   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0610 12:01:54.755297   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0610 12:01:54.755375   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0610 12:01:54.765800   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0610 12:01:54.775568   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0610 12:01:54.775636   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 12:01:54.785076   60146 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0610 12:01:54.793645   60146 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0610 12:01:54.793706   60146 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
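Editor's note: the grep/rm pairs above are a stale-config cleanup. Each kubeconfig under /etc/kubernetes is kept only if it already references the expected endpoint (https://control-plane.minikube.internal:8444); after the reset every file is gone, so they are all removed and the subsequent `kubeadm init` regenerates them. A minimal sketch of that check follows; it mirrors the logged commands but is not minikube's code.

// stale_kubeconfig_cleanup.go - keep a kubeconfig only if it points at the
// expected API endpoint, otherwise remove it so `kubeadm init` rewrites it.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: drop it (errors from Remove are ignored,
			// e.g. when the file does not exist, as in this log).
			_ = os.Remove(f)
			fmt.Printf("removed stale %s\n", f)
			continue
		}
		fmt.Printf("kept %s\n", f)
	}
}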
	I0610 12:01:54.803137   60146 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 12:01:54.855022   60146 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0610 12:01:54.855094   60146 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 12:01:54.995399   60146 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 12:01:54.995511   60146 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 12:01:54.995622   60146 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 12:01:55.194136   60146 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 12:01:55.196296   60146 out.go:204]   - Generating certificates and keys ...
	I0610 12:01:55.196396   60146 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 12:01:55.196475   60146 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 12:01:55.196575   60146 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0610 12:01:55.196680   60146 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0610 12:01:55.196792   60146 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0610 12:01:55.196874   60146 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0610 12:01:55.196984   60146 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0610 12:01:55.197077   60146 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0610 12:01:55.197158   60146 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0610 12:01:55.197231   60146 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0610 12:01:55.197265   60146 kubeadm.go:309] [certs] Using the existing "sa" key
	I0610 12:01:55.197320   60146 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 12:01:55.299197   60146 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 12:01:55.490367   60146 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 12:01:55.751377   60146 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 12:01:55.863144   60146 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 12:01:56.112395   60146 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 12:01:56.113059   60146 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 12:01:56.118410   60146 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 12:01:56.120277   60146 out.go:204]   - Booting up control plane ...
	I0610 12:01:56.120416   60146 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 12:01:56.120503   60146 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 12:01:56.120565   60146 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 12:01:56.138057   60146 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 12:01:56.138509   60146 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 12:01:56.138563   60146 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 12:01:56.263559   60146 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0610 12:01:56.263688   60146 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 12:01:57.264829   60146 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001316355s
	I0610 12:01:57.264927   60146 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0610 12:02:02.267632   60146 kubeadm.go:309] [api-check] The API server is healthy after 5.001644567s
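Editor's note: the [api-check] phase amounts to polling the API server's health endpoint until it answers. A rough standalone equivalent is sketched below. The address and port come from this cluster (192.168.50.222:8444); kubeadm uses its own authenticated client rather than a bare HTTP GET, and the certificate check is skipped here only to keep the example short.

// apiserver_healthcheck.go - poll the API server's /healthz endpoint until it
// returns 200, within the same 4m0s budget kubeadm states above. Illustrative only.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // demo only
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.50.222:8444/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("API server is healthy")
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("API server did not become healthy in time")
}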
	I0610 12:02:02.282693   60146 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 12:02:02.305741   60146 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 12:02:02.341283   60146 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 12:02:02.341527   60146 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-281114 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 12:02:02.355256   60146 kubeadm.go:309] [bootstrap-token] Using token: mkpvnr.wlx5xvctjlg8pi72
	I0610 12:02:02.356920   60146 out.go:204]   - Configuring RBAC rules ...
	I0610 12:02:02.357052   60146 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 12:02:02.367773   60146 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 12:02:02.376921   60146 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 12:02:02.386582   60146 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 12:02:02.390887   60146 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 12:02:02.399245   60146 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 12:02:02.674008   60146 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 12:02:03.137504   60146 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 12:02:03.673560   60146 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 12:02:03.674588   60146 kubeadm.go:309] 
	I0610 12:02:03.674677   60146 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 12:02:03.674694   60146 kubeadm.go:309] 
	I0610 12:02:03.674774   60146 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 12:02:03.674784   60146 kubeadm.go:309] 
	I0610 12:02:03.674813   60146 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 12:02:03.674924   60146 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 12:02:03.675014   60146 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 12:02:03.675026   60146 kubeadm.go:309] 
	I0610 12:02:03.675128   60146 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 12:02:03.675150   60146 kubeadm.go:309] 
	I0610 12:02:03.675225   60146 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 12:02:03.675234   60146 kubeadm.go:309] 
	I0610 12:02:03.675344   60146 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 12:02:03.675460   60146 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 12:02:03.675587   60146 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 12:02:03.677879   60146 kubeadm.go:309] 
	I0610 12:02:03.677961   60146 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 12:02:03.678057   60146 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 12:02:03.678068   60146 kubeadm.go:309] 
	I0610 12:02:03.678160   60146 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token mkpvnr.wlx5xvctjlg8pi72 \
	I0610 12:02:03.678304   60146 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e \
	I0610 12:02:03.678338   60146 kubeadm.go:309] 	--control-plane 
	I0610 12:02:03.678348   60146 kubeadm.go:309] 
	I0610 12:02:03.678446   60146 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 12:02:03.678460   60146 kubeadm.go:309] 
	I0610 12:02:03.678580   60146 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token mkpvnr.wlx5xvctjlg8pi72 \
	I0610 12:02:03.678726   60146 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e 
	I0610 12:02:03.678869   60146 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 12:02:03.678886   60146 cni.go:84] Creating CNI manager for ""
	I0610 12:02:03.678896   60146 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 12:02:03.681019   60146 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 12:02:03.682415   60146 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 12:02:03.693028   60146 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
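Editor's note: the 496-byte file installed above is the CNI conflist for the bridge network that the kvm2 driver plus crio runtime falls back to. The sketch below writes a generic bridge + portmap chain of that shape; the exact contents of minikube's 1-k8s.conflist are not reproduced in this log, so the plugin settings and subnet here are illustrative assumptions only.

// write_bridge_cni.go - write an example bridge CNI conflist to /etc/cni/net.d.
// Field values are placeholders, not the verified contents of minikube's file.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}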
	I0610 12:02:03.711436   60146 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 12:02:03.711534   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:03.711611   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-281114 minikube.k8s.io/updated_at=2024_06_10T12_02_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=default-k8s-diff-port-281114 minikube.k8s.io/primary=true
	I0610 12:02:03.888463   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:03.926946   60146 ops.go:34] apiserver oom_adj: -16
	I0610 12:02:04.389105   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:04.888545   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:05.389096   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:05.888853   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:06.389522   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:06.889491   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:07.389417   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:07.889485   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:08.388869   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:08.889480   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:09.389130   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:09.889052   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:10.389053   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:10.889177   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:11.388985   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:11.889405   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:12.388805   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:12.889139   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:13.389072   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:13.888843   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:14.389349   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:14.888798   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:15.388800   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:15.888491   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:16.389394   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:16.889175   60146 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:02:17.007766   60146 kubeadm.go:1107] duration metric: took 13.296278569s to wait for elevateKubeSystemPrivileges
	W0610 12:02:17.007804   60146 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 12:02:17.007813   60146 kubeadm.go:393] duration metric: took 5m13.970894294s to StartCluster
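Editor's note: the block that ends here (elevateKubeSystemPrivileges) does two things: it creates the minikube-rbac ClusterRoleBinding granting cluster-admin to the kube-system:default service account, and it keeps re-running `kubectl get sa default` until the default service account exists, which is why the same command repeats for about 13 seconds above. A hedged client-go sketch of both steps follows, with the binding and kubeconfig names copied from the log; it is not minikube's implementation.

// elevate_kube_system.go - create the minikube-rbac binding, then wait for the
// default service account to appear (the equivalent of the repeated `kubectl get sa default`).
package main

import (
	"context"
	"fmt"
	"time"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// In-VM kubeconfig path, as used by the logged commands.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
		RoleRef:    rbacv1.RoleRef{APIGroup: "rbac.authorization.k8s.io", Kind: "ClusterRole", Name: "cluster-admin"},
		Subjects:   []rbacv1.Subject{{Kind: "ServiceAccount", Name: "default", Namespace: "kube-system"}},
	}
	if _, err := client.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{}); err != nil {
		fmt.Println("create clusterrolebinding:", err) // may already exist
	}

	// Poll until the "default" service account exists, as the log does.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if _, err := client.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{}); err == nil {
			fmt.Println("default service account is available")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("default service account did not appear in time")
}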
	I0610 12:02:17.007835   60146 settings.go:142] acquiring lock: {Name:mk00410f6b6051b7558c7a348cc8c9f1c35c7547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:02:17.007914   60146 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 12:02:17.009456   60146 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/kubeconfig: {Name:mk6bc087e599296d9e4a696a021944fac20ee98b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:02:17.009751   60146 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.222 Port:8444 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 12:02:17.011669   60146 out.go:177] * Verifying Kubernetes components...
	I0610 12:02:17.009833   60146 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 12:02:17.011705   60146 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-281114"
	I0610 12:02:17.013481   60146 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-281114"
	W0610 12:02:17.013496   60146 addons.go:243] addon storage-provisioner should already be in state true
	I0610 12:02:17.013539   60146 host.go:66] Checking if "default-k8s-diff-port-281114" exists ...
	I0610 12:02:17.011715   60146 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-281114"
	I0610 12:02:17.013612   60146 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-281114"
	W0610 12:02:17.013628   60146 addons.go:243] addon metrics-server should already be in state true
	I0610 12:02:17.013669   60146 host.go:66] Checking if "default-k8s-diff-port-281114" exists ...
	I0610 12:02:17.009996   60146 config.go:182] Loaded profile config "default-k8s-diff-port-281114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 12:02:17.011717   60146 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-281114"
	I0610 12:02:17.013437   60146 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:02:17.013792   60146 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-281114"
	I0610 12:02:17.013961   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.014009   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.014043   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.014066   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.014174   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.014211   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.030604   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43907
	I0610 12:02:17.031126   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.031701   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.031729   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.032073   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.032272   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 12:02:17.034510   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42827
	I0610 12:02:17.034557   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42127
	I0610 12:02:17.034950   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.035130   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.035437   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.035459   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.035888   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.035968   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.035986   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.036820   60146 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-281114"
	W0610 12:02:17.036839   60146 addons.go:243] addon default-storageclass should already be in state true
	I0610 12:02:17.036865   60146 host.go:66] Checking if "default-k8s-diff-port-281114" exists ...
	I0610 12:02:17.037323   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.037345   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.038068   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.038408   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.038428   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.039402   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.039436   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.052901   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40043
	I0610 12:02:17.053390   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.053936   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.053959   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.054226   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38515
	I0610 12:02:17.054303   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.054569   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.054905   60146 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:02:17.054933   60146 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:02:17.055019   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.055040   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.055448   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.055637   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 12:02:17.057623   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 12:02:17.059785   60146 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 12:02:17.058684   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38959
	I0610 12:02:17.060310   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.061277   60146 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 12:02:17.061292   60146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 12:02:17.061311   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 12:02:17.061738   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.061762   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.062097   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.062405   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 12:02:17.064169   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 12:02:17.065635   60146 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0610 12:02:17.065251   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 12:02:17.066901   60146 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0610 12:02:17.065677   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 12:02:17.066921   60146 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0610 12:02:17.066945   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 12:02:17.066952   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 12:02:17.065921   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 12:02:17.067144   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 12:02:17.067267   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 12:02:17.067437   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 12:02:17.070722   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 12:02:17.071110   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 12:02:17.071125   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 12:02:17.071422   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 12:02:17.071582   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 12:02:17.071714   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 12:02:17.072048   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
	I0610 12:02:17.073784   60146 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46447
	I0610 12:02:17.074157   60146 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:02:17.074645   60146 main.go:141] libmachine: Using API Version  1
	I0610 12:02:17.074659   60146 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:02:17.074986   60146 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:02:17.075129   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetState
	I0610 12:02:17.076879   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .DriverName
	I0610 12:02:17.077138   60146 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 12:02:17.077153   60146 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 12:02:17.077170   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHHostname
	I0610 12:02:17.080253   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 12:02:17.080667   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:06:35", ip: ""} in network mk-default-k8s-diff-port-281114: {Iface:virbr1 ExpiryTime:2024-06-10 12:56:46 +0000 UTC Type:0 Mac:52:54:00:23:06:35 Iaid: IPaddr:192.168.50.222 Prefix:24 Hostname:default-k8s-diff-port-281114 Clientid:01:52:54:00:23:06:35}
	I0610 12:02:17.080698   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | domain default-k8s-diff-port-281114 has defined IP address 192.168.50.222 and MAC address 52:54:00:23:06:35 in network mk-default-k8s-diff-port-281114
	I0610 12:02:17.080862   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHPort
	I0610 12:02:17.081088   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHKeyPath
	I0610 12:02:17.081280   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .GetSSHUsername
	I0610 12:02:17.081466   60146 sshutil.go:53] new ssh client: &{IP:192.168.50.222 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa Username:docker}
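Editor's note: every ssh_runner.go line in this log is a command executed inside the VM over SSH, using the connection details shown in the sshutil lines above (user docker, the machine's id_rsa key, 192.168.50.222:22). A minimal illustration with golang.org/x/crypto/ssh follows; minikube's runner adds retries and file transfer, and a real client should verify the host key rather than ignore it as this sketch does.

// ssh_run.go - connect to the node with the machine key and run one command,
// here the same `sudo systemctl start kubelet` that appears in the next log line.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19046-3880/.minikube/machines/default-k8s-diff-port-281114/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only; verify in real use
	}
	client, err := ssh.Dial("tcp", "192.168.50.222:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("sudo systemctl start kubelet")
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("command failed:", err)
	}
}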
	I0610 12:02:17.226805   60146 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 12:02:17.257188   60146 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-281114" to be "Ready" ...
	I0610 12:02:17.266803   60146 node_ready.go:49] node "default-k8s-diff-port-281114" has status "Ready":"True"
	I0610 12:02:17.266829   60146 node_ready.go:38] duration metric: took 9.610473ms for node "default-k8s-diff-port-281114" to be "Ready" ...
	I0610 12:02:17.266840   60146 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 12:02:17.273132   60146 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5fgtk" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:17.327416   60146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0610 12:02:17.327442   60146 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0610 12:02:17.366670   60146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 12:02:17.367685   60146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 12:02:17.378833   60146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0610 12:02:17.378858   60146 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0610 12:02:17.436533   60146 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0610 12:02:17.436558   60146 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0610 12:02:17.490426   60146 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0610 12:02:18.279491   60146 pod_ready.go:92] pod "coredns-7db6d8ff4d-5fgtk" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:18.279516   60146 pod_ready.go:81] duration metric: took 1.006353706s for pod "coredns-7db6d8ff4d-5fgtk" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.279527   60146 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fg8xx" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.286003   60146 pod_ready.go:92] pod "coredns-7db6d8ff4d-fg8xx" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:18.286024   60146 pod_ready.go:81] duration metric: took 6.488693ms for pod "coredns-7db6d8ff4d-fg8xx" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.286036   60146 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.295995   60146 pod_ready.go:92] pod "etcd-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:18.296015   60146 pod_ready.go:81] duration metric: took 9.973573ms for pod "etcd-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.296024   60146 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.302383   60146 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:18.302407   60146 pod_ready.go:81] duration metric: took 6.376673ms for pod "kube-apiserver-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.302418   60146 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.421208   60146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.054498973s)
	I0610 12:02:18.421244   60146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.053533062s)
	I0610 12:02:18.421270   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.421278   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.421285   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.421290   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.421645   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Closing plugin on server side
	I0610 12:02:18.421691   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.421706   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.421715   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.421717   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.421723   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.421726   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.421734   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.421743   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.422083   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Closing plugin on server side
	I0610 12:02:18.422103   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.422122   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.422123   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.422132   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.453377   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.453408   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.453803   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Closing plugin on server side
	I0610 12:02:18.453806   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.453831   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.475839   60146 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:18.475867   60146 pod_ready.go:81] duration metric: took 173.440125ms for pod "kube-controller-manager-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.475881   60146 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wh756" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:18.673586   60146 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.183120727s)
	I0610 12:02:18.673646   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.673662   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.673961   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Closing plugin on server side
	I0610 12:02:18.674001   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.674010   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.674020   60146 main.go:141] libmachine: Making call to close driver server
	I0610 12:02:18.674045   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) Calling .Close
	I0610 12:02:18.674315   60146 main.go:141] libmachine: (default-k8s-diff-port-281114) DBG | Closing plugin on server side
	I0610 12:02:18.674356   60146 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:02:18.674365   60146 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:02:18.674376   60146 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-281114"
	I0610 12:02:18.676402   60146 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0610 12:02:18.677734   60146 addons.go:510] duration metric: took 1.667897142s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0610 12:02:19.660297   60146 pod_ready.go:92] pod "kube-proxy-wh756" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:19.660327   60146 pod_ready.go:81] duration metric: took 1.184438894s for pod "kube-proxy-wh756" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:19.660340   60146 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:20.060583   60146 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-281114" in "kube-system" namespace has status "Ready":"True"
	I0610 12:02:20.060607   60146 pod_ready.go:81] duration metric: took 400.25949ms for pod "kube-scheduler-default-k8s-diff-port-281114" in "kube-system" namespace to be "Ready" ...
	I0610 12:02:20.060616   60146 pod_ready.go:38] duration metric: took 2.793765456s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 12:02:20.060634   60146 api_server.go:52] waiting for apiserver process to appear ...
	I0610 12:02:20.060693   60146 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 12:02:20.076416   60146 api_server.go:72] duration metric: took 3.066630137s to wait for apiserver process to appear ...
	I0610 12:02:20.076441   60146 api_server.go:88] waiting for apiserver healthz status ...
	I0610 12:02:20.076462   60146 api_server.go:253] Checking apiserver healthz at https://192.168.50.222:8444/healthz ...
	I0610 12:02:20.081614   60146 api_server.go:279] https://192.168.50.222:8444/healthz returned 200:
	ok
	I0610 12:02:20.082567   60146 api_server.go:141] control plane version: v1.30.1
	I0610 12:02:20.082589   60146 api_server.go:131] duration metric: took 6.142085ms to wait for apiserver health ...
	I0610 12:02:20.082597   60146 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 12:02:20.263766   60146 system_pods.go:59] 9 kube-system pods found
	I0610 12:02:20.263803   60146 system_pods.go:61] "coredns-7db6d8ff4d-5fgtk" [03d948ca-122a-4042-8371-8a9422c187bc] Running
	I0610 12:02:20.263808   60146 system_pods.go:61] "coredns-7db6d8ff4d-fg8xx" [e91ae09c-8821-4843-8c0d-ea734433c213] Running
	I0610 12:02:20.263815   60146 system_pods.go:61] "etcd-default-k8s-diff-port-281114" [110985f7-c57e-453d-8bda-c5104d879eb4] Running
	I0610 12:02:20.263821   60146 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-281114" [e62181ca-648e-4d5f-b2a7-00bed06f3bd2] Running
	I0610 12:02:20.263827   60146 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-281114" [109f02bd-8c9c-40f6-98e8-5cf2b6d97deb] Running
	I0610 12:02:20.263832   60146 system_pods.go:61] "kube-proxy-wh756" [57cbf3d6-c149-4ae1-84d3-6df6a53ea091] Running
	I0610 12:02:20.263838   60146 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-281114" [00889b82-f4fc-4a98-86cd-ab1028dc4461] Running
	I0610 12:02:20.263848   60146 system_pods.go:61] "metrics-server-569cc877fc-j58s9" [f1c91612-b967-447e-bc71-13ba0d11864b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 12:02:20.263854   60146 system_pods.go:61] "storage-provisioner" [8df0a38c-5e91-4b10-a303-c4eff9545669] Running
	I0610 12:02:20.263866   60146 system_pods.go:74] duration metric: took 181.261717ms to wait for pod list to return data ...
	I0610 12:02:20.263878   60146 default_sa.go:34] waiting for default service account to be created ...
	I0610 12:02:20.460812   60146 default_sa.go:45] found service account: "default"
	I0610 12:02:20.460848   60146 default_sa.go:55] duration metric: took 196.961501ms for default service account to be created ...
	I0610 12:02:20.460860   60146 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 12:02:20.664565   60146 system_pods.go:86] 9 kube-system pods found
	I0610 12:02:20.664591   60146 system_pods.go:89] "coredns-7db6d8ff4d-5fgtk" [03d948ca-122a-4042-8371-8a9422c187bc] Running
	I0610 12:02:20.664596   60146 system_pods.go:89] "coredns-7db6d8ff4d-fg8xx" [e91ae09c-8821-4843-8c0d-ea734433c213] Running
	I0610 12:02:20.664601   60146 system_pods.go:89] "etcd-default-k8s-diff-port-281114" [110985f7-c57e-453d-8bda-c5104d879eb4] Running
	I0610 12:02:20.664606   60146 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-281114" [e62181ca-648e-4d5f-b2a7-00bed06f3bd2] Running
	I0610 12:02:20.664610   60146 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-281114" [109f02bd-8c9c-40f6-98e8-5cf2b6d97deb] Running
	I0610 12:02:20.664614   60146 system_pods.go:89] "kube-proxy-wh756" [57cbf3d6-c149-4ae1-84d3-6df6a53ea091] Running
	I0610 12:02:20.664618   60146 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-281114" [00889b82-f4fc-4a98-86cd-ab1028dc4461] Running
	I0610 12:02:20.664626   60146 system_pods.go:89] "metrics-server-569cc877fc-j58s9" [f1c91612-b967-447e-bc71-13ba0d11864b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0610 12:02:20.664631   60146 system_pods.go:89] "storage-provisioner" [8df0a38c-5e91-4b10-a303-c4eff9545669] Running
	I0610 12:02:20.664640   60146 system_pods.go:126] duration metric: took 203.773693ms to wait for k8s-apps to be running ...
	I0610 12:02:20.664649   60146 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 12:02:20.664690   60146 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 12:02:20.681388   60146 system_svc.go:56] duration metric: took 16.731528ms WaitForService to wait for kubelet
	I0610 12:02:20.681411   60146 kubeadm.go:576] duration metric: took 3.671630148s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 12:02:20.681432   60146 node_conditions.go:102] verifying NodePressure condition ...
	I0610 12:02:20.861346   60146 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0610 12:02:20.861369   60146 node_conditions.go:123] node cpu capacity is 2
	I0610 12:02:20.861379   60146 node_conditions.go:105] duration metric: took 179.94199ms to run NodePressure ...
	I0610 12:02:20.861390   60146 start.go:240] waiting for startup goroutines ...
	I0610 12:02:20.861396   60146 start.go:245] waiting for cluster config update ...
	I0610 12:02:20.861405   60146 start.go:254] writing updated cluster config ...
	I0610 12:02:20.861658   60146 ssh_runner.go:195] Run: rm -f paused
	I0610 12:02:20.911134   60146 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0610 12:02:20.913129   60146 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-281114" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jun 10 12:08:57 old-k8s-version-166693 crio[645]: time="2024-06-10 12:08:57.990063972Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718021337990039992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f7cbeaa-917f-4651-acdd-977bf8b9cdfa name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:08:57 old-k8s-version-166693 crio[645]: time="2024-06-10 12:08:57.990506332Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3698cb3-bf74-4727-9fef-aea46d5ff090 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:08:57 old-k8s-version-166693 crio[645]: time="2024-06-10 12:08:57.990579733Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3698cb3-bf74-4727-9fef-aea46d5ff090 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:08:57 old-k8s-version-166693 crio[645]: time="2024-06-10 12:08:57.990617314Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c3698cb3-bf74-4727-9fef-aea46d5ff090 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:08:58 old-k8s-version-166693 crio[645]: time="2024-06-10 12:08:58.030154358Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=172bf96b-2478-41fe-8adb-75c7629e172f name=/runtime.v1.RuntimeService/Version
	Jun 10 12:08:58 old-k8s-version-166693 crio[645]: time="2024-06-10 12:08:58.030236287Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=172bf96b-2478-41fe-8adb-75c7629e172f name=/runtime.v1.RuntimeService/Version
	Jun 10 12:08:58 old-k8s-version-166693 crio[645]: time="2024-06-10 12:08:58.031298423Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ef04c1ba-1f73-471d-b291-75a83f63fb68 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:08:58 old-k8s-version-166693 crio[645]: time="2024-06-10 12:08:58.031800575Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718021338031771078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ef04c1ba-1f73-471d-b291-75a83f63fb68 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:08:58 old-k8s-version-166693 crio[645]: time="2024-06-10 12:08:58.032359591Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ab7c558d-b569-41ee-a346-2b40151ecace name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:08:58 old-k8s-version-166693 crio[645]: time="2024-06-10 12:08:58.032412727Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ab7c558d-b569-41ee-a346-2b40151ecace name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:08:58 old-k8s-version-166693 crio[645]: time="2024-06-10 12:08:58.032442487Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ab7c558d-b569-41ee-a346-2b40151ecace name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:08:58 old-k8s-version-166693 crio[645]: time="2024-06-10 12:08:58.063699757Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=425e06ac-8289-4c15-a0d1-81c1a7794b24 name=/runtime.v1.RuntimeService/Version
	Jun 10 12:08:58 old-k8s-version-166693 crio[645]: time="2024-06-10 12:08:58.063825937Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=425e06ac-8289-4c15-a0d1-81c1a7794b24 name=/runtime.v1.RuntimeService/Version
	Jun 10 12:08:58 old-k8s-version-166693 crio[645]: time="2024-06-10 12:08:58.064941408Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=12bb88b4-b218-460d-9df5-abbd122b33fc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:08:58 old-k8s-version-166693 crio[645]: time="2024-06-10 12:08:58.065325323Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718021338065301707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=12bb88b4-b218-460d-9df5-abbd122b33fc name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:08:58 old-k8s-version-166693 crio[645]: time="2024-06-10 12:08:58.065797890Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6bafa39-9db9-4244-9f4e-07472a5c7de8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:08:58 old-k8s-version-166693 crio[645]: time="2024-06-10 12:08:58.065870086Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6bafa39-9db9-4244-9f4e-07472a5c7de8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:08:58 old-k8s-version-166693 crio[645]: time="2024-06-10 12:08:58.065910407Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e6bafa39-9db9-4244-9f4e-07472a5c7de8 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:08:58 old-k8s-version-166693 crio[645]: time="2024-06-10 12:08:58.096821147Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=06ec12f2-cfef-47e4-a79c-845c1723ef3c name=/runtime.v1.RuntimeService/Version
	Jun 10 12:08:58 old-k8s-version-166693 crio[645]: time="2024-06-10 12:08:58.096931408Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=06ec12f2-cfef-47e4-a79c-845c1723ef3c name=/runtime.v1.RuntimeService/Version
	Jun 10 12:08:58 old-k8s-version-166693 crio[645]: time="2024-06-10 12:08:58.098211347Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ef1b0a25-136f-435f-adfd-7b6329eeeeb5 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:08:58 old-k8s-version-166693 crio[645]: time="2024-06-10 12:08:58.098614696Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718021338098593955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ef1b0a25-136f-435f-adfd-7b6329eeeeb5 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:08:58 old-k8s-version-166693 crio[645]: time="2024-06-10 12:08:58.099152546Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0b157f90-a583-47ec-b758-2940e30e0ecf name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:08:58 old-k8s-version-166693 crio[645]: time="2024-06-10 12:08:58.099211803Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0b157f90-a583-47ec-b758-2940e30e0ecf name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:08:58 old-k8s-version-166693 crio[645]: time="2024-06-10 12:08:58.099267845Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0b157f90-a583-47ec-b758-2940e30e0ecf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Jun10 11:48] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052778] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039241] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.662307] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.954746] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.609904] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.687001] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.069246] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073631] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.221904] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.142650] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.284629] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +6.510984] systemd-fstab-generator[829]: Ignoring "noauto" option for root device
	[  +0.065299] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.018208] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[ +11.261041] kauditd_printk_skb: 46 callbacks suppressed
	[Jun10 11:52] systemd-fstab-generator[5086]: Ignoring "noauto" option for root device
	[Jun10 11:54] systemd-fstab-generator[5370]: Ignoring "noauto" option for root device
	[  +0.069423] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 12:08:58 up 20 min,  0 users,  load average: 0.20, 0.10, 0.04
	Linux old-k8s-version-166693 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jun 10 12:08:57 old-k8s-version-166693 kubelet[6924]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Jun 10 12:08:57 old-k8s-version-166693 kubelet[6924]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Jun 10 12:08:57 old-k8s-version-166693 kubelet[6924]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Jun 10 12:08:57 old-k8s-version-166693 kubelet[6924]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000987ef0)
	Jun 10 12:08:57 old-k8s-version-166693 kubelet[6924]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Jun 10 12:08:57 old-k8s-version-166693 kubelet[6924]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a91ef0, 0x4f0ac20, 0xc000050320, 0x1, 0xc0001020c0)
	Jun 10 12:08:57 old-k8s-version-166693 kubelet[6924]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Jun 10 12:08:57 old-k8s-version-166693 kubelet[6924]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000278380, 0xc0001020c0)
	Jun 10 12:08:57 old-k8s-version-166693 kubelet[6924]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Jun 10 12:08:57 old-k8s-version-166693 kubelet[6924]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Jun 10 12:08:57 old-k8s-version-166693 kubelet[6924]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Jun 10 12:08:57 old-k8s-version-166693 kubelet[6924]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000670ba0, 0xc0006e26e0)
	Jun 10 12:08:57 old-k8s-version-166693 kubelet[6924]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Jun 10 12:08:57 old-k8s-version-166693 kubelet[6924]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Jun 10 12:08:57 old-k8s-version-166693 kubelet[6924]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Jun 10 12:08:57 old-k8s-version-166693 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jun 10 12:08:57 old-k8s-version-166693 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jun 10 12:08:57 old-k8s-version-166693 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 147.
	Jun 10 12:08:57 old-k8s-version-166693 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jun 10 12:08:57 old-k8s-version-166693 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jun 10 12:08:58 old-k8s-version-166693 kubelet[6967]: I0610 12:08:58.005194    6967 server.go:416] Version: v1.20.0
	Jun 10 12:08:58 old-k8s-version-166693 kubelet[6967]: I0610 12:08:58.005480    6967 server.go:837] Client rotation is on, will bootstrap in background
	Jun 10 12:08:58 old-k8s-version-166693 kubelet[6967]: I0610 12:08:58.007606    6967 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jun 10 12:08:58 old-k8s-version-166693 kubelet[6967]: W0610 12:08:58.009008    6967 manager.go:159] Cannot detect current cgroup on cgroup v2
	Jun 10 12:08:58 old-k8s-version-166693 kubelet[6967]: I0610 12:08:58.009005    6967 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-166693 -n old-k8s-version-166693
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-166693 -n old-k8s-version-166693: exit status 2 (228.652589ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-166693" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (187.51s)
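A minimal sketch, not something the test ran, of how the stopped apiserver reported above could be inspected by hand on the same profile; the status, systemctl, and journalctl invocations mirror commands that appear elsewhere in this report, and choosing kubelet as the unit to inspect is an assumption based on the kubelet restart loop in the logs above:
	# hypothetical manual follow-up on profile old-k8s-version-166693
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-166693 -n old-k8s-version-166693
	out/minikube-linux-amd64 -p old-k8s-version-166693 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 -p old-k8s-version-166693 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 50"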

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (167.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-281114 -n default-k8s-diff-port-281114
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-06-10 12:14:08.103048635 +0000 UTC m=+6801.189080130
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-281114 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-281114 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.696µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-281114 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
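A minimal sketch, not part of the test run, of how the dashboard check above could be repeated by hand against the same profile; the context name, namespace, and label selector are taken from the test output above, and the 9m timeout mirrors the test's wait:
	# hypothetical manual re-check of the kubernetes-dashboard pods the test waits for
	kubectl --context default-k8s-diff-port-281114 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-281114 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
	kubectl --context default-k8s-diff-port-281114 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper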
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-281114 -n default-k8s-diff-port-281114
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-281114 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-281114 logs -n 25: (2.669268391s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-491653                         | enable-default-cni-491653 | jenkins | v1.33.1 | 10 Jun 24 12:13 UTC | 10 Jun 24 12:13 UTC |
	|         | sudo iptables -t nat -L -n -v                        |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-491653                         | enable-default-cni-491653 | jenkins | v1.33.1 | 10 Jun 24 12:13 UTC | 10 Jun 24 12:13 UTC |
	|         | sudo systemctl status kubelet                        |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-491653                         | enable-default-cni-491653 | jenkins | v1.33.1 | 10 Jun 24 12:13 UTC | 10 Jun 24 12:13 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-491653                         | enable-default-cni-491653 | jenkins | v1.33.1 | 10 Jun 24 12:13 UTC | 10 Jun 24 12:13 UTC |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-491653                         | enable-default-cni-491653 | jenkins | v1.33.1 | 10 Jun 24 12:13 UTC | 10 Jun 24 12:13 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-491653                         | enable-default-cni-491653 | jenkins | v1.33.1 | 10 Jun 24 12:13 UTC | 10 Jun 24 12:13 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-491653                         | enable-default-cni-491653 | jenkins | v1.33.1 | 10 Jun 24 12:13 UTC |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-491653                         | enable-default-cni-491653 | jenkins | v1.33.1 | 10 Jun 24 12:13 UTC | 10 Jun 24 12:13 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-491653                         | enable-default-cni-491653 | jenkins | v1.33.1 | 10 Jun 24 12:13 UTC | 10 Jun 24 12:13 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-491653                         | enable-default-cni-491653 | jenkins | v1.33.1 | 10 Jun 24 12:13 UTC |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-491653                         | enable-default-cni-491653 | jenkins | v1.33.1 | 10 Jun 24 12:13 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-491653                         | enable-default-cni-491653 | jenkins | v1.33.1 | 10 Jun 24 12:13 UTC | 10 Jun 24 12:13 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-491653 sudo cat                | enable-default-cni-491653 | jenkins | v1.33.1 | 10 Jun 24 12:13 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-491653 sudo cat                | enable-default-cni-491653 | jenkins | v1.33.1 | 10 Jun 24 12:13 UTC | 10 Jun 24 12:13 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-491653                         | enable-default-cni-491653 | jenkins | v1.33.1 | 10 Jun 24 12:13 UTC | 10 Jun 24 12:13 UTC |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-491653                         | enable-default-cni-491653 | jenkins | v1.33.1 | 10 Jun 24 12:13 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-491653                         | enable-default-cni-491653 | jenkins | v1.33.1 | 10 Jun 24 12:13 UTC | 10 Jun 24 12:13 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-491653 sudo cat                | enable-default-cni-491653 | jenkins | v1.33.1 | 10 Jun 24 12:13 UTC | 10 Jun 24 12:13 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-491653                         | enable-default-cni-491653 | jenkins | v1.33.1 | 10 Jun 24 12:13 UTC | 10 Jun 24 12:13 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-491653                         | enable-default-cni-491653 | jenkins | v1.33.1 | 10 Jun 24 12:13 UTC | 10 Jun 24 12:13 UTC |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-491653                         | enable-default-cni-491653 | jenkins | v1.33.1 | 10 Jun 24 12:13 UTC | 10 Jun 24 12:13 UTC |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-491653                         | enable-default-cni-491653 | jenkins | v1.33.1 | 10 Jun 24 12:13 UTC | 10 Jun 24 12:13 UTC |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-491653                         | enable-default-cni-491653 | jenkins | v1.33.1 | 10 Jun 24 12:13 UTC | 10 Jun 24 12:13 UTC |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-491653                         | enable-default-cni-491653 | jenkins | v1.33.1 | 10 Jun 24 12:13 UTC | 10 Jun 24 12:13 UTC |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-491653                         | enable-default-cni-491653 | jenkins | v1.33.1 | 10 Jun 24 12:14 UTC | 10 Jun 24 12:14 UTC |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 12:13:36
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 12:13:36.994100   73776 out.go:291] Setting OutFile to fd 1 ...
	I0610 12:13:36.994381   73776 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 12:13:36.994396   73776 out.go:304] Setting ErrFile to fd 2...
	I0610 12:13:36.994402   73776 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 12:13:36.994657   73776 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 12:13:36.995854   73776 out.go:298] Setting JSON to false
	I0610 12:13:36.997101   73776 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6958,"bootTime":1718014659,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 12:13:36.997191   73776 start.go:139] virtualization: kvm guest
	I0610 12:13:36.998790   73776 out.go:177] * [bridge-491653] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 12:13:37.000574   73776 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 12:13:37.001811   73776 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 12:13:37.000599   73776 notify.go:220] Checking for updates...
	I0610 12:13:37.003397   73776 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 12:13:37.004710   73776 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 12:13:37.006006   73776 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 12:13:37.007151   73776 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 12:13:37.009119   73776 config.go:182] Loaded profile config "default-k8s-diff-port-281114": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 12:13:37.009266   73776 config.go:182] Loaded profile config "enable-default-cni-491653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 12:13:37.009454   73776 config.go:182] Loaded profile config "flannel-491653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 12:13:37.009591   73776 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 12:13:37.049827   73776 out.go:177] * Using the kvm2 driver based on user configuration
	I0610 12:13:37.051178   73776 start.go:297] selected driver: kvm2
	I0610 12:13:37.051208   73776 start.go:901] validating driver "kvm2" against <nil>
	I0610 12:13:37.051224   73776 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 12:13:37.052003   73776 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 12:13:37.052079   73776 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19046-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0610 12:13:37.072768   73776 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0610 12:13:37.072825   73776 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 12:13:37.073089   73776 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 12:13:37.073119   73776 cni.go:84] Creating CNI manager for "bridge"
	I0610 12:13:37.073127   73776 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 12:13:37.073189   73776 start.go:340] cluster config:
	{Name:bridge-491653 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:bridge-491653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 12:13:37.073300   73776 iso.go:125] acquiring lock: {Name:mk85871dbcb370cef71a4aa2ff7ef43503908c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 12:13:37.074913   73776 out.go:177] * Starting "bridge-491653" primary control-plane node in "bridge-491653" cluster
	I0610 12:13:37.076663   73776 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 12:13:37.076703   73776 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0610 12:13:37.076727   73776 cache.go:56] Caching tarball of preloaded images
	I0610 12:13:37.076786   73776 preload.go:173] Found /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0610 12:13:37.076798   73776 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0610 12:13:37.076905   73776 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/bridge-491653/config.json ...
	I0610 12:13:37.076931   73776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/bridge-491653/config.json: {Name:mk8c62460aa6e655ad700101c0adaa756567cf99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:13:37.077113   73776 start.go:360] acquireMachinesLock for bridge-491653: {Name:mk97d1bc660650f670f4180e5028cc6bf11b1450 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 12:13:37.077147   73776 start.go:364] duration metric: took 18.226µs to acquireMachinesLock for "bridge-491653"
	I0610 12:13:37.077170   73776 start.go:93] Provisioning new machine with config: &{Name:bridge-491653 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:bridge-491653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 12:13:37.077257   73776 start.go:125] createHost starting for "" (driver="kvm2")
	I0610 12:13:37.079601   73776 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 12:13:37.079779   73776 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:13:37.079816   73776 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:13:37.096122   73776 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34187
	I0610 12:13:37.096659   73776 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:13:37.097316   73776 main.go:141] libmachine: Using API Version  1
	I0610 12:13:37.097352   73776 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:13:37.097752   73776 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:13:37.098019   73776 main.go:141] libmachine: (bridge-491653) Calling .GetMachineName
	I0610 12:13:37.098214   73776 main.go:141] libmachine: (bridge-491653) Calling .DriverName
	I0610 12:13:37.098384   73776 start.go:159] libmachine.API.Create for "bridge-491653" (driver="kvm2")
	I0610 12:13:37.098416   73776 client.go:168] LocalClient.Create starting
	I0610 12:13:37.098452   73776 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem
	I0610 12:13:37.098488   73776 main.go:141] libmachine: Decoding PEM data...
	I0610 12:13:37.098507   73776 main.go:141] libmachine: Parsing certificate...
	I0610 12:13:37.098578   73776 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem
	I0610 12:13:37.098603   73776 main.go:141] libmachine: Decoding PEM data...
	I0610 12:13:37.098622   73776 main.go:141] libmachine: Parsing certificate...
	I0610 12:13:37.098646   73776 main.go:141] libmachine: Running pre-create checks...
	I0610 12:13:37.098660   73776 main.go:141] libmachine: (bridge-491653) Calling .PreCreateCheck
	I0610 12:13:37.099038   73776 main.go:141] libmachine: (bridge-491653) Calling .GetConfigRaw
	I0610 12:13:37.099554   73776 main.go:141] libmachine: Creating machine...
	I0610 12:13:37.099576   73776 main.go:141] libmachine: (bridge-491653) Calling .Create
	I0610 12:13:37.099729   73776 main.go:141] libmachine: (bridge-491653) Creating KVM machine...
	I0610 12:13:37.101329   73776 main.go:141] libmachine: (bridge-491653) DBG | found existing default KVM network
	I0610 12:13:37.103107   73776 main.go:141] libmachine: (bridge-491653) DBG | I0610 12:13:37.102957   73799 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d390}
	I0610 12:13:37.103153   73776 main.go:141] libmachine: (bridge-491653) DBG | created network xml: 
	I0610 12:13:37.103168   73776 main.go:141] libmachine: (bridge-491653) DBG | <network>
	I0610 12:13:37.103189   73776 main.go:141] libmachine: (bridge-491653) DBG |   <name>mk-bridge-491653</name>
	I0610 12:13:37.103198   73776 main.go:141] libmachine: (bridge-491653) DBG |   <dns enable='no'/>
	I0610 12:13:37.103224   73776 main.go:141] libmachine: (bridge-491653) DBG |   
	I0610 12:13:37.103234   73776 main.go:141] libmachine: (bridge-491653) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0610 12:13:37.103244   73776 main.go:141] libmachine: (bridge-491653) DBG |     <dhcp>
	I0610 12:13:37.103252   73776 main.go:141] libmachine: (bridge-491653) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0610 12:13:37.103266   73776 main.go:141] libmachine: (bridge-491653) DBG |     </dhcp>
	I0610 12:13:37.103276   73776 main.go:141] libmachine: (bridge-491653) DBG |   </ip>
	I0610 12:13:37.103285   73776 main.go:141] libmachine: (bridge-491653) DBG |   
	I0610 12:13:37.103292   73776 main.go:141] libmachine: (bridge-491653) DBG | </network>
	I0610 12:13:37.103301   73776 main.go:141] libmachine: (bridge-491653) DBG | 
	I0610 12:13:37.108669   73776 main.go:141] libmachine: (bridge-491653) DBG | trying to create private KVM network mk-bridge-491653 192.168.39.0/24...
	I0610 12:13:37.229901   73776 main.go:141] libmachine: (bridge-491653) DBG | private KVM network mk-bridge-491653 192.168.39.0/24 created
	I0610 12:13:37.230490   73776 main.go:141] libmachine: (bridge-491653) DBG | I0610 12:13:37.230122   73799 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 12:13:37.230534   73776 main.go:141] libmachine: (bridge-491653) Setting up store path in /home/jenkins/minikube-integration/19046-3880/.minikube/machines/bridge-491653 ...
	I0610 12:13:37.230554   73776 main.go:141] libmachine: (bridge-491653) Building disk image from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0610 12:13:37.230580   73776 main.go:141] libmachine: (bridge-491653) Downloading /home/jenkins/minikube-integration/19046-3880/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso...
	I0610 12:13:37.571726   73776 main.go:141] libmachine: (bridge-491653) DBG | I0610 12:13:37.571615   73799 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/bridge-491653/id_rsa...
	I0610 12:13:37.937354   73776 main.go:141] libmachine: (bridge-491653) DBG | I0610 12:13:37.937178   73799 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/bridge-491653/bridge-491653.rawdisk...
	I0610 12:13:37.937397   73776 main.go:141] libmachine: (bridge-491653) DBG | Writing magic tar header
	I0610 12:13:37.937411   73776 main.go:141] libmachine: (bridge-491653) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/bridge-491653 (perms=drwx------)
	I0610 12:13:37.937429   73776 main.go:141] libmachine: (bridge-491653) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube/machines (perms=drwxr-xr-x)
	I0610 12:13:37.937442   73776 main.go:141] libmachine: (bridge-491653) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880/.minikube (perms=drwxr-xr-x)
	I0610 12:13:37.937451   73776 main.go:141] libmachine: (bridge-491653) DBG | Writing SSH key tar header
	I0610 12:13:37.937466   73776 main.go:141] libmachine: (bridge-491653) DBG | I0610 12:13:37.937293   73799 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19046-3880/.minikube/machines/bridge-491653 ...
	I0610 12:13:37.937475   73776 main.go:141] libmachine: (bridge-491653) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/bridge-491653
	I0610 12:13:37.937486   73776 main.go:141] libmachine: (bridge-491653) Setting executable bit set on /home/jenkins/minikube-integration/19046-3880 (perms=drwxrwxr-x)
	I0610 12:13:37.937500   73776 main.go:141] libmachine: (bridge-491653) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0610 12:13:37.937509   73776 main.go:141] libmachine: (bridge-491653) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0610 12:13:37.937520   73776 main.go:141] libmachine: (bridge-491653) Creating domain...
	I0610 12:13:37.937565   73776 main.go:141] libmachine: (bridge-491653) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube/machines
	I0610 12:13:37.937592   73776 main.go:141] libmachine: (bridge-491653) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 12:13:37.937604   73776 main.go:141] libmachine: (bridge-491653) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19046-3880
	I0610 12:13:37.937618   73776 main.go:141] libmachine: (bridge-491653) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0610 12:13:37.937627   73776 main.go:141] libmachine: (bridge-491653) DBG | Checking permissions on dir: /home/jenkins
	I0610 12:13:37.937640   73776 main.go:141] libmachine: (bridge-491653) DBG | Checking permissions on dir: /home
	I0610 12:13:37.937660   73776 main.go:141] libmachine: (bridge-491653) DBG | Skipping /home - not owner
	I0610 12:13:37.941278   73776 main.go:141] libmachine: (bridge-491653) define libvirt domain using xml: 
	I0610 12:13:37.941308   73776 main.go:141] libmachine: (bridge-491653) <domain type='kvm'>
	I0610 12:13:37.941320   73776 main.go:141] libmachine: (bridge-491653)   <name>bridge-491653</name>
	I0610 12:13:37.941328   73776 main.go:141] libmachine: (bridge-491653)   <memory unit='MiB'>3072</memory>
	I0610 12:13:37.941337   73776 main.go:141] libmachine: (bridge-491653)   <vcpu>2</vcpu>
	I0610 12:13:37.941344   73776 main.go:141] libmachine: (bridge-491653)   <features>
	I0610 12:13:37.941353   73776 main.go:141] libmachine: (bridge-491653)     <acpi/>
	I0610 12:13:37.941359   73776 main.go:141] libmachine: (bridge-491653)     <apic/>
	I0610 12:13:37.941367   73776 main.go:141] libmachine: (bridge-491653)     <pae/>
	I0610 12:13:37.941376   73776 main.go:141] libmachine: (bridge-491653)     
	I0610 12:13:37.941384   73776 main.go:141] libmachine: (bridge-491653)   </features>
	I0610 12:13:37.941392   73776 main.go:141] libmachine: (bridge-491653)   <cpu mode='host-passthrough'>
	I0610 12:13:37.941400   73776 main.go:141] libmachine: (bridge-491653)   
	I0610 12:13:37.941406   73776 main.go:141] libmachine: (bridge-491653)   </cpu>
	I0610 12:13:37.941415   73776 main.go:141] libmachine: (bridge-491653)   <os>
	I0610 12:13:37.941427   73776 main.go:141] libmachine: (bridge-491653)     <type>hvm</type>
	I0610 12:13:37.941436   73776 main.go:141] libmachine: (bridge-491653)     <boot dev='cdrom'/>
	I0610 12:13:37.941447   73776 main.go:141] libmachine: (bridge-491653)     <boot dev='hd'/>
	I0610 12:13:37.941456   73776 main.go:141] libmachine: (bridge-491653)     <bootmenu enable='no'/>
	I0610 12:13:37.941462   73776 main.go:141] libmachine: (bridge-491653)   </os>
	I0610 12:13:37.941469   73776 main.go:141] libmachine: (bridge-491653)   <devices>
	I0610 12:13:37.941481   73776 main.go:141] libmachine: (bridge-491653)     <disk type='file' device='cdrom'>
	I0610 12:13:37.941493   73776 main.go:141] libmachine: (bridge-491653)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/bridge-491653/boot2docker.iso'/>
	I0610 12:13:37.941503   73776 main.go:141] libmachine: (bridge-491653)       <target dev='hdc' bus='scsi'/>
	I0610 12:13:37.941511   73776 main.go:141] libmachine: (bridge-491653)       <readonly/>
	I0610 12:13:37.941518   73776 main.go:141] libmachine: (bridge-491653)     </disk>
	I0610 12:13:37.941528   73776 main.go:141] libmachine: (bridge-491653)     <disk type='file' device='disk'>
	I0610 12:13:37.941537   73776 main.go:141] libmachine: (bridge-491653)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0610 12:13:37.941550   73776 main.go:141] libmachine: (bridge-491653)       <source file='/home/jenkins/minikube-integration/19046-3880/.minikube/machines/bridge-491653/bridge-491653.rawdisk'/>
	I0610 12:13:37.941558   73776 main.go:141] libmachine: (bridge-491653)       <target dev='hda' bus='virtio'/>
	I0610 12:13:37.941566   73776 main.go:141] libmachine: (bridge-491653)     </disk>
	I0610 12:13:37.941578   73776 main.go:141] libmachine: (bridge-491653)     <interface type='network'>
	I0610 12:13:37.941588   73776 main.go:141] libmachine: (bridge-491653)       <source network='mk-bridge-491653'/>
	I0610 12:13:37.941595   73776 main.go:141] libmachine: (bridge-491653)       <model type='virtio'/>
	I0610 12:13:37.941603   73776 main.go:141] libmachine: (bridge-491653)     </interface>
	I0610 12:13:37.941610   73776 main.go:141] libmachine: (bridge-491653)     <interface type='network'>
	I0610 12:13:37.941621   73776 main.go:141] libmachine: (bridge-491653)       <source network='default'/>
	I0610 12:13:37.941628   73776 main.go:141] libmachine: (bridge-491653)       <model type='virtio'/>
	I0610 12:13:37.941636   73776 main.go:141] libmachine: (bridge-491653)     </interface>
	I0610 12:13:37.941644   73776 main.go:141] libmachine: (bridge-491653)     <serial type='pty'>
	I0610 12:13:37.941675   73776 main.go:141] libmachine: (bridge-491653)       <target port='0'/>
	I0610 12:13:37.941697   73776 main.go:141] libmachine: (bridge-491653)     </serial>
	I0610 12:13:37.941713   73776 main.go:141] libmachine: (bridge-491653)     <console type='pty'>
	I0610 12:13:37.941738   73776 main.go:141] libmachine: (bridge-491653)       <target type='serial' port='0'/>
	I0610 12:13:37.941746   73776 main.go:141] libmachine: (bridge-491653)     </console>
	I0610 12:13:37.941753   73776 main.go:141] libmachine: (bridge-491653)     <rng model='virtio'>
	I0610 12:13:37.941764   73776 main.go:141] libmachine: (bridge-491653)       <backend model='random'>/dev/random</backend>
	I0610 12:13:37.941770   73776 main.go:141] libmachine: (bridge-491653)     </rng>
	I0610 12:13:37.941777   73776 main.go:141] libmachine: (bridge-491653)     
	I0610 12:13:37.941783   73776 main.go:141] libmachine: (bridge-491653)     
	I0610 12:13:37.941791   73776 main.go:141] libmachine: (bridge-491653)   </devices>
	I0610 12:13:37.941797   73776 main.go:141] libmachine: (bridge-491653) </domain>
	I0610 12:13:37.941815   73776 main.go:141] libmachine: (bridge-491653) 
	I0610 12:13:37.946637   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:c0:5e:8c in network default
	I0610 12:13:37.947363   73776 main.go:141] libmachine: (bridge-491653) Ensuring networks are active...
	I0610 12:13:37.947386   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:13:37.948212   73776 main.go:141] libmachine: (bridge-491653) Ensuring network default is active
	I0610 12:13:37.948620   73776 main.go:141] libmachine: (bridge-491653) Ensuring network mk-bridge-491653 is active
	I0610 12:13:37.949324   73776 main.go:141] libmachine: (bridge-491653) Getting domain xml...
	I0610 12:13:37.950398   73776 main.go:141] libmachine: (bridge-491653) Creating domain...
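The two "has defined MAC address ... in network ..." debug lines just below reflect the two <interface> elements in the domain XML above: one NIC on mk-bridge-491653 and one on the default network. A minimal sketch of recovering that mapping outside the test harness, assuming only the stock "virsh domiflist" column layout (Interface, Type, Source, Model, MAC); the program name is illustrative.

    // domiflist.go: list the NICs libvirt attached to a domain, mirroring the
    // "has defined MAC address ... in network ..." lines in this log.
    package main

    import (
        "bufio"
        "bytes"
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        if len(os.Args) != 2 {
            log.Fatalf("usage: %s <domain>", os.Args[0])
        }
        out, err := exec.Command("virsh", "--connect", "qemu:///system",
            "domiflist", os.Args[1]).Output()
        if err != nil {
            log.Fatalf("virsh domiflist: %v", err)
        }
        sc := bufio.NewScanner(bytes.NewReader(out))
        for sc.Scan() {
            f := strings.Fields(sc.Text())
            // Data rows have five columns; skip the header ("... MAC") and separator.
            if len(f) != 5 || f[4] == "MAC" {
                continue
            }
            fmt.Printf("domain %s has MAC %s in network %s\n", os.Args[1], f[4], f[2])
        }
    }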
	I0610 12:13:39.390170   73776 main.go:141] libmachine: (bridge-491653) Waiting to get IP...
	I0610 12:13:39.391027   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:13:39.391503   73776 main.go:141] libmachine: (bridge-491653) DBG | unable to find current IP address of domain bridge-491653 in network mk-bridge-491653
	I0610 12:13:39.391536   73776 main.go:141] libmachine: (bridge-491653) DBG | I0610 12:13:39.391469   73799 retry.go:31] will retry after 275.448979ms: waiting for machine to come up
	I0610 12:13:39.669270   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:13:39.669771   73776 main.go:141] libmachine: (bridge-491653) DBG | unable to find current IP address of domain bridge-491653 in network mk-bridge-491653
	I0610 12:13:39.669805   73776 main.go:141] libmachine: (bridge-491653) DBG | I0610 12:13:39.669746   73799 retry.go:31] will retry after 301.060494ms: waiting for machine to come up
	I0610 12:13:39.972187   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:13:39.972718   73776 main.go:141] libmachine: (bridge-491653) DBG | unable to find current IP address of domain bridge-491653 in network mk-bridge-491653
	I0610 12:13:39.972747   73776 main.go:141] libmachine: (bridge-491653) DBG | I0610 12:13:39.972675   73799 retry.go:31] will retry after 455.370451ms: waiting for machine to come up
	I0610 12:13:40.429347   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:13:40.429916   73776 main.go:141] libmachine: (bridge-491653) DBG | unable to find current IP address of domain bridge-491653 in network mk-bridge-491653
	I0610 12:13:40.429944   73776 main.go:141] libmachine: (bridge-491653) DBG | I0610 12:13:40.429876   73799 retry.go:31] will retry after 435.900042ms: waiting for machine to come up
	I0610 12:13:40.867019   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:13:40.867519   73776 main.go:141] libmachine: (bridge-491653) DBG | unable to find current IP address of domain bridge-491653 in network mk-bridge-491653
	I0610 12:13:40.867545   73776 main.go:141] libmachine: (bridge-491653) DBG | I0610 12:13:40.867470   73799 retry.go:31] will retry after 750.097334ms: waiting for machine to come up
	I0610 12:13:41.619437   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:13:41.619939   73776 main.go:141] libmachine: (bridge-491653) DBG | unable to find current IP address of domain bridge-491653 in network mk-bridge-491653
	I0610 12:13:41.619960   73776 main.go:141] libmachine: (bridge-491653) DBG | I0610 12:13:41.619875   73799 retry.go:31] will retry after 738.933884ms: waiting for machine to come up
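The retry.go lines here (and the further ones interleaved below) poll libvirt for the guest's DHCP lease with a growing delay until the machine reports an address. A rough equivalent using "virsh net-dhcp-leases" is sketched below; the file name and the backoff schedule are invented for illustration, the virsh subcommand is standard.

    // waitip.go: poll a libvirt network's DHCP leases until a MAC shows up,
    // with a growing backoff similar to the retries in this log. Sketch only;
    // assumes virsh and the qemu:///system connection.
    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        if len(os.Args) != 3 {
            log.Fatalf("usage: %s <network> <mac>", os.Args[0])
        }
        network, mac := os.Args[1], strings.ToLower(os.Args[2])
        delay := 300 * time.Millisecond
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            out, err := exec.Command("virsh", "--connect", "qemu:///system",
                "net-dhcp-leases", network).Output()
            if err == nil {
                for _, line := range strings.Split(string(out), "\n") {
                    if strings.Contains(strings.ToLower(line), mac) {
                        fmt.Println("lease found:", strings.TrimSpace(line))
                        return
                    }
                }
            }
            log.Printf("no lease for %s yet, retrying after %v", mac, delay)
            time.Sleep(delay)
            if delay < 5*time.Second {
                delay += delay / 2 // grow the wait, roughly like the logged retries
            }
        }
        log.Fatalf("timed out waiting for a DHCP lease for %s in network %s", mac, network)
    }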
	I0610 12:13:43.966221   71982 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0610 12:13:43.966292   71982 kubeadm.go:309] [preflight] Running pre-flight checks
	I0610 12:13:43.966385   71982 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 12:13:43.966498   71982 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 12:13:43.966621   71982 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 12:13:43.966729   71982 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 12:13:43.968520   71982 out.go:204]   - Generating certificates and keys ...
	I0610 12:13:43.968611   71982 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0610 12:13:43.968668   71982 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0610 12:13:43.968769   71982 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 12:13:43.968884   71982 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0610 12:13:43.968990   71982 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0610 12:13:43.969061   71982 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0610 12:13:43.969141   71982 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0610 12:13:43.969302   71982 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [flannel-491653 localhost] and IPs [192.168.72.233 127.0.0.1 ::1]
	I0610 12:13:43.969394   71982 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0610 12:13:43.969578   71982 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [flannel-491653 localhost] and IPs [192.168.72.233 127.0.0.1 ::1]
	I0610 12:13:43.969664   71982 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 12:13:43.969735   71982 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 12:13:43.969798   71982 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0610 12:13:43.969879   71982 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 12:13:43.969960   71982 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 12:13:43.970054   71982 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0610 12:13:43.970138   71982 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 12:13:43.970205   71982 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 12:13:43.970251   71982 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 12:13:43.970346   71982 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 12:13:43.970418   71982 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 12:13:43.973110   71982 out.go:204]   - Booting up control plane ...
	I0610 12:13:43.973245   71982 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 12:13:43.973370   71982 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 12:13:43.973451   71982 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 12:13:43.973605   71982 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 12:13:43.973730   71982 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 12:13:43.973781   71982 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0610 12:13:43.973927   71982 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0610 12:13:43.974050   71982 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0610 12:13:43.974137   71982 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.000988214s
	I0610 12:13:43.974269   71982 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0610 12:13:43.974337   71982 kubeadm.go:309] [api-check] The API server is healthy after 6.002721515s
	I0610 12:13:43.974462   71982 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 12:13:43.974619   71982 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 12:13:43.974705   71982 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 12:13:43.974927   71982 kubeadm.go:309] [mark-control-plane] Marking the node flannel-491653 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 12:13:43.975070   71982 kubeadm.go:309] [bootstrap-token] Using token: b9jxcm.t05yod4tvnq0mo56
	I0610 12:13:43.977177   71982 out.go:204]   - Configuring RBAC rules ...
	I0610 12:13:43.977319   71982 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 12:13:43.977470   71982 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 12:13:43.977680   71982 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 12:13:43.977831   71982 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 12:13:43.977936   71982 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 12:13:43.978044   71982 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 12:13:43.978148   71982 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 12:13:43.978222   71982 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0610 12:13:43.978275   71982 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0610 12:13:43.978283   71982 kubeadm.go:309] 
	I0610 12:13:43.978328   71982 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0610 12:13:43.978337   71982 kubeadm.go:309] 
	I0610 12:13:43.978397   71982 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0610 12:13:43.978403   71982 kubeadm.go:309] 
	I0610 12:13:43.978454   71982 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0610 12:13:43.978529   71982 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 12:13:43.978592   71982 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 12:13:43.978603   71982 kubeadm.go:309] 
	I0610 12:13:43.978663   71982 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0610 12:13:43.978676   71982 kubeadm.go:309] 
	I0610 12:13:43.978741   71982 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 12:13:43.978751   71982 kubeadm.go:309] 
	I0610 12:13:43.978807   71982 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0610 12:13:43.978879   71982 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 12:13:43.978954   71982 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 12:13:43.978961   71982 kubeadm.go:309] 
	I0610 12:13:43.979066   71982 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 12:13:43.979129   71982 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0610 12:13:43.979135   71982 kubeadm.go:309] 
	I0610 12:13:43.979255   71982 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token b9jxcm.t05yod4tvnq0mo56 \
	I0610 12:13:43.979404   71982 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e \
	I0610 12:13:43.979425   71982 kubeadm.go:309] 	--control-plane 
	I0610 12:13:43.979430   71982 kubeadm.go:309] 
	I0610 12:13:43.979494   71982 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0610 12:13:43.979504   71982 kubeadm.go:309] 
	I0610 12:13:43.979613   71982 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token b9jxcm.t05yod4tvnq0mo56 \
	I0610 12:13:43.979765   71982 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:f6bd6fc2cfbe2581ca2cff66e225691b71bb7d2faca164d22650fa3f750ee57e 
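The join commands printed above embed a --discovery-token-ca-cert-hash. If that hash ever needs to be reproduced later (for example to rebuild a join command after reissuing a token), the openssl pipeline given in the kubeadm documentation recomputes it from the cluster CA. A sketch that shells out to that pipeline, assuming it runs on the control-plane host with /etc/kubernetes/pki/ca.crt in place (the program name is illustrative):

    // cahash.go: recompute the --discovery-token-ca-cert-hash value from the
    // cluster CA certificate. Sketch only.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        pipeline := "openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt" +
            " | openssl rsa -pubin -outform der 2>/dev/null" +
            " | openssl dgst -sha256 -hex | sed 's/^.* //'"
        out, err := exec.Command("/bin/bash", "-c", pipeline).Output()
        if err != nil {
            log.Fatalf("computing CA cert hash: %v", err)
        }
        fmt.Println("sha256:" + strings.TrimSpace(string(out)))
    }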
	I0610 12:13:43.979793   71982 cni.go:84] Creating CNI manager for "flannel"
	I0610 12:13:43.981534   71982 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0610 12:13:43.983138   71982 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0610 12:13:43.988834   71982 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0610 12:13:43.988856   71982 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I0610 12:13:44.013722   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0610 12:13:44.493693   71982 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 12:13:44.493858   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:44.493861   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-491653 minikube.k8s.io/updated_at=2024_06_10T12_13_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959 minikube.k8s.io/name=flannel-491653 minikube.k8s.io/primary=true
	I0610 12:13:44.535076   71982 ops.go:34] apiserver oom_adj: -16
	I0610 12:13:44.657516   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:45.157751   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:45.658417   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:42.359987   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:13:42.360579   73776 main.go:141] libmachine: (bridge-491653) DBG | unable to find current IP address of domain bridge-491653 in network mk-bridge-491653
	I0610 12:13:42.360602   73776 main.go:141] libmachine: (bridge-491653) DBG | I0610 12:13:42.360539   73799 retry.go:31] will retry after 1.161872936s: waiting for machine to come up
	I0610 12:13:43.524045   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:13:43.524592   73776 main.go:141] libmachine: (bridge-491653) DBG | unable to find current IP address of domain bridge-491653 in network mk-bridge-491653
	I0610 12:13:43.524622   73776 main.go:141] libmachine: (bridge-491653) DBG | I0610 12:13:43.524530   73799 retry.go:31] will retry after 1.336316917s: waiting for machine to come up
	I0610 12:13:44.863090   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:13:44.863709   73776 main.go:141] libmachine: (bridge-491653) DBG | unable to find current IP address of domain bridge-491653 in network mk-bridge-491653
	I0610 12:13:44.863737   73776 main.go:141] libmachine: (bridge-491653) DBG | I0610 12:13:44.863664   73799 retry.go:31] will retry after 1.119552935s: waiting for machine to come up
	I0610 12:13:45.985051   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:13:45.985619   73776 main.go:141] libmachine: (bridge-491653) DBG | unable to find current IP address of domain bridge-491653 in network mk-bridge-491653
	I0610 12:13:45.985650   73776 main.go:141] libmachine: (bridge-491653) DBG | I0610 12:13:45.985573   73799 retry.go:31] will retry after 1.863072751s: waiting for machine to come up
	I0610 12:13:46.158029   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:46.658156   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:47.158151   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:47.658265   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:48.157639   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:48.657947   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:49.158175   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:49.657821   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:50.158198   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:50.658426   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:47.850138   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:13:47.850719   73776 main.go:141] libmachine: (bridge-491653) DBG | unable to find current IP address of domain bridge-491653 in network mk-bridge-491653
	I0610 12:13:47.850754   73776 main.go:141] libmachine: (bridge-491653) DBG | I0610 12:13:47.850662   73799 retry.go:31] will retry after 2.808883589s: waiting for machine to come up
	I0610 12:13:50.662175   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:13:50.662676   73776 main.go:141] libmachine: (bridge-491653) DBG | unable to find current IP address of domain bridge-491653 in network mk-bridge-491653
	I0610 12:13:50.662702   73776 main.go:141] libmachine: (bridge-491653) DBG | I0610 12:13:50.662641   73799 retry.go:31] will retry after 2.463281358s: waiting for machine to come up
	I0610 12:13:51.158583   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:51.657654   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:52.158126   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:52.657541   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:53.158123   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:53.657873   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:54.157623   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:54.658081   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:55.157551   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:55.658179   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:53.127742   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:13:53.128302   73776 main.go:141] libmachine: (bridge-491653) DBG | unable to find current IP address of domain bridge-491653 in network mk-bridge-491653
	I0610 12:13:53.128336   73776 main.go:141] libmachine: (bridge-491653) DBG | I0610 12:13:53.128287   73799 retry.go:31] will retry after 4.49094012s: waiting for machine to come up
	I0610 12:13:56.158463   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:56.657577   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:57.158468   71982 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 12:13:57.267799   71982 kubeadm.go:1107] duration metric: took 12.774017556s to wait for elevateKubeSystemPrivileges
	W0610 12:13:57.267838   71982 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0610 12:13:57.267848   71982 kubeadm.go:393] duration metric: took 25.600428847s to StartCluster
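The repeated "kubectl get sa default" runs above are a readiness poll: the default ServiceAccount is created asynchronously after kubeadm init, and the privilege-elevation step cannot finish until it exists, which is roughly what the 12.77s duration metric covers. The same wait, reduced to a standalone sketch (the file name and the two-minute timeout are illustrative; kubectl usage is standard):

    // waitsa.go: poll until the "default" ServiceAccount exists, as the
    // repeated "kubectl get sa default" runs in this log do. Assumes kubectl
    // on PATH and KUBECONFIG pointing at the new cluster.
    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // The default ServiceAccount appears only once the controller-manager
            // has caught up, so this command fails until then.
            if err := exec.Command("kubectl", "get", "sa", "default",
                "--namespace", "default").Run(); err == nil {
                log.Println("default ServiceAccount is present")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("timed out waiting for the default ServiceAccount")
    }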
	I0610 12:13:57.267872   71982 settings.go:142] acquiring lock: {Name:mk00410f6b6051b7558c7a348cc8c9f1c35c7547 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:13:57.267942   71982 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 12:13:57.269783   71982 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/kubeconfig: {Name:mk6bc087e599296d9e4a696a021944fac20ee98b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 12:13:57.270082   71982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 12:13:57.270098   71982 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.72.233 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0610 12:13:57.271923   71982 out.go:177] * Verifying Kubernetes components...
	I0610 12:13:57.270180   71982 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0610 12:13:57.270309   71982 config.go:182] Loaded profile config "flannel-491653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 12:13:57.273167   71982 addons.go:69] Setting storage-provisioner=true in profile "flannel-491653"
	I0610 12:13:57.273187   71982 addons.go:69] Setting default-storageclass=true in profile "flannel-491653"
	I0610 12:13:57.273217   71982 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-491653"
	I0610 12:13:57.273219   71982 addons.go:234] Setting addon storage-provisioner=true in "flannel-491653"
	I0610 12:13:57.273254   71982 host.go:66] Checking if "flannel-491653" exists ...
	I0610 12:13:57.273177   71982 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:13:57.273670   71982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:13:57.273672   71982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:13:57.273690   71982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:13:57.273694   71982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:13:57.292512   71982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39983
	I0610 12:13:57.293090   71982 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:13:57.293674   71982 main.go:141] libmachine: Using API Version  1
	I0610 12:13:57.293704   71982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:13:57.294077   71982 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:13:57.294697   71982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:13:57.294726   71982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:13:57.296166   71982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40759
	I0610 12:13:57.296510   71982 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:13:57.297100   71982 main.go:141] libmachine: Using API Version  1
	I0610 12:13:57.297118   71982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:13:57.297444   71982 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:13:57.297625   71982 main.go:141] libmachine: (flannel-491653) Calling .GetState
	I0610 12:13:57.301306   71982 addons.go:234] Setting addon default-storageclass=true in "flannel-491653"
	I0610 12:13:57.301347   71982 host.go:66] Checking if "flannel-491653" exists ...
	I0610 12:13:57.301714   71982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:13:57.301734   71982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:13:57.312715   71982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34303
	I0610 12:13:57.313260   71982 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:13:57.313732   71982 main.go:141] libmachine: Using API Version  1
	I0610 12:13:57.313748   71982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:13:57.314161   71982 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:13:57.314338   71982 main.go:141] libmachine: (flannel-491653) Calling .GetState
	I0610 12:13:57.316127   71982 main.go:141] libmachine: (flannel-491653) Calling .DriverName
	I0610 12:13:57.318562   71982 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 12:13:57.319249   71982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I0610 12:13:57.319931   71982 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 12:13:57.319947   71982 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 12:13:57.319965   71982 main.go:141] libmachine: (flannel-491653) Calling .GetSSHHostname
	I0610 12:13:57.320411   71982 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:13:57.320894   71982 main.go:141] libmachine: Using API Version  1
	I0610 12:13:57.320916   71982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:13:57.321269   71982 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:13:57.321717   71982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 12:13:57.321752   71982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 12:13:57.322785   71982 main.go:141] libmachine: (flannel-491653) DBG | domain flannel-491653 has defined MAC address 52:54:00:95:17:6b in network mk-flannel-491653
	I0610 12:13:57.323173   71982 main.go:141] libmachine: (flannel-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:17:6b", ip: ""} in network mk-flannel-491653: {Iface:virbr3 ExpiryTime:2024-06-10 13:13:15 +0000 UTC Type:0 Mac:52:54:00:95:17:6b Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:flannel-491653 Clientid:01:52:54:00:95:17:6b}
	I0610 12:13:57.323193   71982 main.go:141] libmachine: (flannel-491653) DBG | domain flannel-491653 has defined IP address 192.168.72.233 and MAC address 52:54:00:95:17:6b in network mk-flannel-491653
	I0610 12:13:57.323358   71982 main.go:141] libmachine: (flannel-491653) Calling .GetSSHPort
	I0610 12:13:57.323571   71982 main.go:141] libmachine: (flannel-491653) Calling .GetSSHKeyPath
	I0610 12:13:57.323695   71982 main.go:141] libmachine: (flannel-491653) Calling .GetSSHUsername
	I0610 12:13:57.323865   71982 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/flannel-491653/id_rsa Username:docker}
	I0610 12:13:57.337130   71982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41095
	I0610 12:13:57.337493   71982 main.go:141] libmachine: () Calling .GetVersion
	I0610 12:13:57.337962   71982 main.go:141] libmachine: Using API Version  1
	I0610 12:13:57.337978   71982 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 12:13:57.338257   71982 main.go:141] libmachine: () Calling .GetMachineName
	I0610 12:13:57.338422   71982 main.go:141] libmachine: (flannel-491653) Calling .GetState
	I0610 12:13:57.340042   71982 main.go:141] libmachine: (flannel-491653) Calling .DriverName
	I0610 12:13:57.340272   71982 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 12:13:57.340284   71982 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 12:13:57.340299   71982 main.go:141] libmachine: (flannel-491653) Calling .GetSSHHostname
	I0610 12:13:57.343414   71982 main.go:141] libmachine: (flannel-491653) DBG | domain flannel-491653 has defined MAC address 52:54:00:95:17:6b in network mk-flannel-491653
	I0610 12:13:57.343722   71982 main.go:141] libmachine: (flannel-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:95:17:6b", ip: ""} in network mk-flannel-491653: {Iface:virbr3 ExpiryTime:2024-06-10 13:13:15 +0000 UTC Type:0 Mac:52:54:00:95:17:6b Iaid: IPaddr:192.168.72.233 Prefix:24 Hostname:flannel-491653 Clientid:01:52:54:00:95:17:6b}
	I0610 12:13:57.343744   71982 main.go:141] libmachine: (flannel-491653) DBG | domain flannel-491653 has defined IP address 192.168.72.233 and MAC address 52:54:00:95:17:6b in network mk-flannel-491653
	I0610 12:13:57.343828   71982 main.go:141] libmachine: (flannel-491653) Calling .GetSSHPort
	I0610 12:13:57.343982   71982 main.go:141] libmachine: (flannel-491653) Calling .GetSSHKeyPath
	I0610 12:13:57.344092   71982 main.go:141] libmachine: (flannel-491653) Calling .GetSSHUsername
	I0610 12:13:57.344199   71982 sshutil.go:53] new ssh client: &{IP:192.168.72.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/flannel-491653/id_rsa Username:docker}
	I0610 12:13:57.546985   71982 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0610 12:13:57.547218   71982 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0610 12:13:57.565313   71982 node_ready.go:35] waiting up to 15m0s for node "flannel-491653" to be "Ready" ...
	I0610 12:13:57.595719   71982 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 12:13:57.675049   71982 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 12:13:57.973572   71982 start.go:946] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
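The sed pipeline a few lines up rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host-side gateway (192.168.72.1 here), and this line confirms the replace succeeded. A quick way to double-check that the record landed, sketched in Go (the program name is illustrative; the kubectl jsonpath usage is standard):

    // corednscheck.go: confirm the host.minikube.internal record is present in
    // the CoreDNS Corefile. Sketch only; assumes kubectl and a kubeconfig for
    // the cluster.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "--namespace", "kube-system",
            "get", "configmap", "coredns", "-o", "jsonpath={.data.Corefile}").Output()
        if err != nil {
            log.Fatalf("reading coredns ConfigMap: %v", err)
        }
        if strings.Contains(string(out), "host.minikube.internal") {
            fmt.Println("host.minikube.internal record is present in the Corefile")
        } else {
            log.Fatal("host.minikube.internal record not found in the Corefile")
        }
    }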
	I0610 12:13:57.973631   71982 main.go:141] libmachine: Making call to close driver server
	I0610 12:13:57.973650   71982 main.go:141] libmachine: (flannel-491653) Calling .Close
	I0610 12:13:57.974017   71982 main.go:141] libmachine: (flannel-491653) DBG | Closing plugin on server side
	I0610 12:13:57.974017   71982 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:13:57.974039   71982 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:13:57.974048   71982 main.go:141] libmachine: Making call to close driver server
	I0610 12:13:57.974055   71982 main.go:141] libmachine: (flannel-491653) Calling .Close
	I0610 12:13:57.975829   71982 main.go:141] libmachine: (flannel-491653) DBG | Closing plugin on server side
	I0610 12:13:57.975837   71982 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:13:57.975851   71982 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:13:57.982620   71982 main.go:141] libmachine: Making call to close driver server
	I0610 12:13:57.982642   71982 main.go:141] libmachine: (flannel-491653) Calling .Close
	I0610 12:13:57.982889   71982 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:13:57.982903   71982 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:13:58.281375   71982 main.go:141] libmachine: Making call to close driver server
	I0610 12:13:58.281403   71982 main.go:141] libmachine: (flannel-491653) Calling .Close
	I0610 12:13:58.281727   71982 main.go:141] libmachine: (flannel-491653) DBG | Closing plugin on server side
	I0610 12:13:58.281766   71982 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:13:58.281786   71982 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:13:58.281799   71982 main.go:141] libmachine: Making call to close driver server
	I0610 12:13:58.281810   71982 main.go:141] libmachine: (flannel-491653) Calling .Close
	I0610 12:13:58.282079   71982 main.go:141] libmachine: (flannel-491653) DBG | Closing plugin on server side
	I0610 12:13:58.282116   71982 main.go:141] libmachine: Successfully made call to close driver server
	I0610 12:13:58.282124   71982 main.go:141] libmachine: Making call to close connection to plugin binary
	I0610 12:13:58.284078   71982 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0610 12:13:58.285189   71982 addons.go:510] duration metric: took 1.015006043s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0610 12:13:58.502246   71982 kapi.go:248] "coredns" deployment in "kube-system" namespace and "flannel-491653" context rescaled to 1 replicas
	I0610 12:13:59.568652   71982 node_ready.go:53] node "flannel-491653" has status "Ready":"False"
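node_ready.go is polling the flannel-491653 node object until its Ready condition turns True, with a 15-minute budget; at this point the CNI has only just been applied, so the node still reports Ready=False. The same wait can be expressed as a one-shot "kubectl wait"; a sketch (the file name is illustrative):

    // nodeready.go: block until a node reports Ready, like the node_ready.go
    // polling in this log. Assumes kubectl and a kubeconfig for the cluster.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        if len(os.Args) != 2 {
            log.Fatalf("usage: %s <node-name>", os.Args[0])
        }
        cmd := exec.Command("kubectl", "wait", "--for=condition=Ready",
            "node/"+os.Args[1], "--timeout=15m")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("node %s did not become Ready: %v", os.Args[1], err)
        }
    }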
	I0610 12:13:57.620456   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:13:57.621132   73776 main.go:141] libmachine: (bridge-491653) Found IP for machine: 192.168.39.171
	I0610 12:13:57.621151   73776 main.go:141] libmachine: (bridge-491653) Reserving static IP address...
	I0610 12:13:57.621170   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has current primary IP address 192.168.39.171 and MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:13:57.621462   73776 main.go:141] libmachine: (bridge-491653) DBG | unable to find host DHCP lease matching {name: "bridge-491653", mac: "52:54:00:63:2e:76", ip: "192.168.39.171"} in network mk-bridge-491653
	I0610 12:13:57.712524   73776 main.go:141] libmachine: (bridge-491653) DBG | Getting to WaitForSSH function...
	I0610 12:13:57.712555   73776 main.go:141] libmachine: (bridge-491653) Reserved static IP address: 192.168.39.171
	I0610 12:13:57.712565   73776 main.go:141] libmachine: (bridge-491653) Waiting for SSH to be available...
	I0610 12:13:57.716464   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:13:57.716854   73776 main.go:141] libmachine: (bridge-491653) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:63:2e:76", ip: ""} in network mk-bridge-491653
	I0610 12:13:57.716877   73776 main.go:141] libmachine: (bridge-491653) DBG | unable to find defined IP address of network mk-bridge-491653 interface with MAC address 52:54:00:63:2e:76
	I0610 12:13:57.717160   73776 main.go:141] libmachine: (bridge-491653) DBG | Using SSH client type: external
	I0610 12:13:57.717181   73776 main.go:141] libmachine: (bridge-491653) DBG | Using SSH private key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/bridge-491653/id_rsa (-rw-------)
	I0610 12:13:57.717210   73776 main.go:141] libmachine: (bridge-491653) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19046-3880/.minikube/machines/bridge-491653/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0610 12:13:57.717221   73776 main.go:141] libmachine: (bridge-491653) DBG | About to run SSH command:
	I0610 12:13:57.717232   73776 main.go:141] libmachine: (bridge-491653) DBG | exit 0
	I0610 12:13:57.721777   73776 main.go:141] libmachine: (bridge-491653) DBG | SSH cmd err, output: exit status 255: 
	I0610 12:13:57.721797   73776 main.go:141] libmachine: (bridge-491653) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0610 12:13:57.721808   73776 main.go:141] libmachine: (bridge-491653) DBG | command : exit 0
	I0610 12:13:57.721815   73776 main.go:141] libmachine: (bridge-491653) DBG | err     : exit status 255
	I0610 12:13:57.721825   73776 main.go:141] libmachine: (bridge-491653) DBG | output  : 
	I0610 12:14:00.723918   73776 main.go:141] libmachine: (bridge-491653) DBG | Getting to WaitForSSH function...
	I0610 12:14:00.874817   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:00.875311   73776 main.go:141] libmachine: (bridge-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:2e:76", ip: ""} in network mk-bridge-491653: {Iface:virbr2 ExpiryTime:2024-06-10 13:13:51 +0000 UTC Type:0 Mac:52:54:00:63:2e:76 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:bridge-491653 Clientid:01:52:54:00:63:2e:76}
	I0610 12:14:00.875345   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined IP address 192.168.39.171 and MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:00.875513   73776 main.go:141] libmachine: (bridge-491653) DBG | Using SSH client type: external
	I0610 12:14:00.875541   73776 main.go:141] libmachine: (bridge-491653) DBG | Using SSH private key: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/bridge-491653/id_rsa (-rw-------)
	I0610 12:14:00.875570   73776 main.go:141] libmachine: (bridge-491653) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.171 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19046-3880/.minikube/machines/bridge-491653/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0610 12:14:00.875591   73776 main.go:141] libmachine: (bridge-491653) DBG | About to run SSH command:
	I0610 12:14:00.875621   73776 main.go:141] libmachine: (bridge-491653) DBG | exit 0
	I0610 12:14:01.001453   73776 main.go:141] libmachine: (bridge-491653) DBG | SSH cmd err, output: <nil>: 
	I0610 12:14:01.001756   73776 main.go:141] libmachine: (bridge-491653) KVM machine creation complete!
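WaitForSSH above simply reruns "exit 0" over SSH until the guest's sshd accepts the key; the first attempt fails with exit status 255 because the address is not yet reachable, and the second, after the lease appears, succeeds. A stripped-down version of that loop, assuming the OpenSSH client and the machine's generated id_rsa (the file name and timeout are illustrative):

    // waitssh.go: retry "exit 0" over SSH until the guest is reachable, as the
    // WaitForSSH steps in this log do. Sketch only.
    package main

    import (
        "log"
        "os"
        "os/exec"
        "time"
    )

    func main() {
        if len(os.Args) != 3 {
            log.Fatalf("usage: %s <ip> <identity-file>", os.Args[0])
        }
        ip, key := os.Args[1], os.Args[2]
        deadline := time.Now().Add(3 * time.Minute)
        for time.Now().Before(deadline) {
            err := exec.Command("ssh",
                "-o", "StrictHostKeyChecking=no",
                "-o", "UserKnownHostsFile=/dev/null",
                "-o", "ConnectTimeout=10",
                "-i", key, "docker@"+ip, "exit", "0").Run()
            if err == nil {
                log.Println("SSH is available on", ip)
                return
            }
            log.Printf("SSH not ready yet (%v), retrying", err)
            time.Sleep(3 * time.Second)
        }
        log.Fatal("timed out waiting for SSH")
    }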
	I0610 12:14:01.002130   73776 main.go:141] libmachine: (bridge-491653) Calling .GetConfigRaw
	I0610 12:14:01.002762   73776 main.go:141] libmachine: (bridge-491653) Calling .DriverName
	I0610 12:14:01.002975   73776 main.go:141] libmachine: (bridge-491653) Calling .DriverName
	I0610 12:14:01.003147   73776 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0610 12:14:01.003162   73776 main.go:141] libmachine: (bridge-491653) Calling .GetState
	I0610 12:14:01.004632   73776 main.go:141] libmachine: Detecting operating system of created instance...
	I0610 12:14:01.004649   73776 main.go:141] libmachine: Waiting for SSH to be available...
	I0610 12:14:01.004657   73776 main.go:141] libmachine: Getting to WaitForSSH function...
	I0610 12:14:01.004664   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHHostname
	I0610 12:14:01.007395   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:01.007784   73776 main.go:141] libmachine: (bridge-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:2e:76", ip: ""} in network mk-bridge-491653: {Iface:virbr2 ExpiryTime:2024-06-10 13:13:51 +0000 UTC Type:0 Mac:52:54:00:63:2e:76 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:bridge-491653 Clientid:01:52:54:00:63:2e:76}
	I0610 12:14:01.007813   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined IP address 192.168.39.171 and MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:01.007911   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHPort
	I0610 12:14:01.008092   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHKeyPath
	I0610 12:14:01.008223   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHKeyPath
	I0610 12:14:01.008340   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHUsername
	I0610 12:14:01.008536   73776 main.go:141] libmachine: Using SSH client type: native
	I0610 12:14:01.008835   73776 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0610 12:14:01.008851   73776 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0610 12:14:01.116226   73776 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 12:14:01.116254   73776 main.go:141] libmachine: Detecting the provisioner...
	I0610 12:14:01.116264   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHHostname
	I0610 12:14:01.118903   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:01.119209   73776 main.go:141] libmachine: (bridge-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:2e:76", ip: ""} in network mk-bridge-491653: {Iface:virbr2 ExpiryTime:2024-06-10 13:13:51 +0000 UTC Type:0 Mac:52:54:00:63:2e:76 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:bridge-491653 Clientid:01:52:54:00:63:2e:76}
	I0610 12:14:01.119242   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined IP address 192.168.39.171 and MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:01.119385   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHPort
	I0610 12:14:01.119575   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHKeyPath
	I0610 12:14:01.119755   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHKeyPath
	I0610 12:14:01.119910   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHUsername
	I0610 12:14:01.120071   73776 main.go:141] libmachine: Using SSH client type: native
	I0610 12:14:01.120226   73776 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0610 12:14:01.120238   73776 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0610 12:14:01.229478   73776 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0610 12:14:01.229529   73776 main.go:141] libmachine: found compatible host: buildroot
	I0610 12:14:01.229535   73776 main.go:141] libmachine: Provisioning with buildroot...
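Provisioner detection above boils down to reading /etc/os-release on the guest and matching its ID field ("buildroot" here). A local-only sketch of that parse follows; the program name is invented, and a real provisioner would run "cat /etc/os-release" over SSH rather than open the file directly.

    // osrelease.go: detect the distribution the way the provisioner detection
    // in this log does, by reading ID from /etc/os-release. Sketch only.
    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/os-release")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := sc.Text()
            if strings.HasPrefix(line, "ID=") {
                id := strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
                fmt.Println("detected provisioner ID:", id) // e.g. "buildroot" above
                return
            }
        }
        log.Fatal("no ID field found in /etc/os-release")
    }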
	I0610 12:14:01.229543   73776 main.go:141] libmachine: (bridge-491653) Calling .GetMachineName
	I0610 12:14:01.229813   73776 buildroot.go:166] provisioning hostname "bridge-491653"
	I0610 12:14:01.229838   73776 main.go:141] libmachine: (bridge-491653) Calling .GetMachineName
	I0610 12:14:01.230030   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHHostname
	I0610 12:14:01.232540   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:01.232838   73776 main.go:141] libmachine: (bridge-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:2e:76", ip: ""} in network mk-bridge-491653: {Iface:virbr2 ExpiryTime:2024-06-10 13:13:51 +0000 UTC Type:0 Mac:52:54:00:63:2e:76 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:bridge-491653 Clientid:01:52:54:00:63:2e:76}
	I0610 12:14:01.232871   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined IP address 192.168.39.171 and MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:01.233085   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHPort
	I0610 12:14:01.233280   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHKeyPath
	I0610 12:14:01.233441   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHKeyPath
	I0610 12:14:01.233578   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHUsername
	I0610 12:14:01.233737   73776 main.go:141] libmachine: Using SSH client type: native
	I0610 12:14:01.233936   73776 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0610 12:14:01.233966   73776 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-491653 && echo "bridge-491653" | sudo tee /etc/hostname
	I0610 12:14:01.355122   73776 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-491653
	
	I0610 12:14:01.355158   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHHostname
	I0610 12:14:01.358256   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:01.358544   73776 main.go:141] libmachine: (bridge-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:2e:76", ip: ""} in network mk-bridge-491653: {Iface:virbr2 ExpiryTime:2024-06-10 13:13:51 +0000 UTC Type:0 Mac:52:54:00:63:2e:76 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:bridge-491653 Clientid:01:52:54:00:63:2e:76}
	I0610 12:14:01.358570   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined IP address 192.168.39.171 and MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:01.358801   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHPort
	I0610 12:14:01.358974   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHKeyPath
	I0610 12:14:01.359115   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHKeyPath
	I0610 12:14:01.359265   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHUsername
	I0610 12:14:01.359415   73776 main.go:141] libmachine: Using SSH client type: native
	I0610 12:14:01.359562   73776 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0610 12:14:01.359578   73776 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-491653' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-491653/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-491653' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 12:14:01.477747   73776 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 12:14:01.477775   73776 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19046-3880/.minikube CaCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19046-3880/.minikube}
	I0610 12:14:01.477791   73776 buildroot.go:174] setting up certificates
	I0610 12:14:01.477809   73776 provision.go:84] configureAuth start
	I0610 12:14:01.477822   73776 main.go:141] libmachine: (bridge-491653) Calling .GetMachineName
	I0610 12:14:01.478223   73776 main.go:141] libmachine: (bridge-491653) Calling .GetIP
	I0610 12:14:01.481167   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:01.481538   73776 main.go:141] libmachine: (bridge-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:2e:76", ip: ""} in network mk-bridge-491653: {Iface:virbr2 ExpiryTime:2024-06-10 13:13:51 +0000 UTC Type:0 Mac:52:54:00:63:2e:76 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:bridge-491653 Clientid:01:52:54:00:63:2e:76}
	I0610 12:14:01.481569   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined IP address 192.168.39.171 and MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:01.481694   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHHostname
	I0610 12:14:01.483933   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:01.484301   73776 main.go:141] libmachine: (bridge-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:2e:76", ip: ""} in network mk-bridge-491653: {Iface:virbr2 ExpiryTime:2024-06-10 13:13:51 +0000 UTC Type:0 Mac:52:54:00:63:2e:76 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:bridge-491653 Clientid:01:52:54:00:63:2e:76}
	I0610 12:14:01.484334   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined IP address 192.168.39.171 and MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:01.484438   73776 provision.go:143] copyHostCerts
	I0610 12:14:01.484503   73776 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem, removing ...
	I0610 12:14:01.484513   73776 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem
	I0610 12:14:01.484585   73776 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/key.pem (1675 bytes)
	I0610 12:14:01.484665   73776 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem, removing ...
	I0610 12:14:01.484674   73776 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem
	I0610 12:14:01.484696   73776 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/ca.pem (1082 bytes)
	I0610 12:14:01.484755   73776 exec_runner.go:144] found /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem, removing ...
	I0610 12:14:01.484766   73776 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem
	I0610 12:14:01.484803   73776 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19046-3880/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19046-3880/.minikube/cert.pem (1123 bytes)
	I0610 12:14:01.484874   73776 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca-key.pem org=jenkins.bridge-491653 san=[127.0.0.1 192.168.39.171 bridge-491653 localhost minikube]
	I0610 12:14:01.635008   73776 provision.go:177] copyRemoteCerts
	I0610 12:14:01.635059   73776 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 12:14:01.635081   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHHostname
	I0610 12:14:01.637762   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:01.638132   73776 main.go:141] libmachine: (bridge-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:2e:76", ip: ""} in network mk-bridge-491653: {Iface:virbr2 ExpiryTime:2024-06-10 13:13:51 +0000 UTC Type:0 Mac:52:54:00:63:2e:76 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:bridge-491653 Clientid:01:52:54:00:63:2e:76}
	I0610 12:14:01.638163   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined IP address 192.168.39.171 and MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:01.638335   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHPort
	I0610 12:14:01.638534   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHKeyPath
	I0610 12:14:01.638685   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHUsername
	I0610 12:14:01.638840   73776 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/bridge-491653/id_rsa Username:docker}
	I0610 12:14:01.723196   73776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0610 12:14:01.747338   73776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0610 12:14:01.771112   73776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 12:14:01.795758   73776 provision.go:87] duration metric: took 317.935303ms to configureAuth
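The configureAuth step above generated a server certificate with the SAN list logged at 12:14:01.484874 and copied it to /etc/docker on the guest. A minimal sketch for spot-checking the result from inside the VM, assuming an openssl binary is available in the buildroot image (this check is not part of the logged run):

	# Illustrative only: inspect the SANs of the freshly provisioned server cert,
	# e.g. over SSH with the id_rsa key shown above.
	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	# Expect the names from the san=[...] list: 127.0.0.1, 192.168.39.171, bridge-491653, localhost, minikube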
	I0610 12:14:01.795790   73776 buildroot.go:189] setting minikube options for container-runtime
	I0610 12:14:01.795938   73776 config.go:182] Loaded profile config "bridge-491653": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 12:14:01.796017   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHHostname
	I0610 12:14:01.798856   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:01.799286   73776 main.go:141] libmachine: (bridge-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:2e:76", ip: ""} in network mk-bridge-491653: {Iface:virbr2 ExpiryTime:2024-06-10 13:13:51 +0000 UTC Type:0 Mac:52:54:00:63:2e:76 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:bridge-491653 Clientid:01:52:54:00:63:2e:76}
	I0610 12:14:01.799310   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined IP address 192.168.39.171 and MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:01.799462   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHPort
	I0610 12:14:01.799654   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHKeyPath
	I0610 12:14:01.799804   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHKeyPath
	I0610 12:14:01.799925   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHUsername
	I0610 12:14:01.800076   73776 main.go:141] libmachine: Using SSH client type: native
	I0610 12:14:01.800231   73776 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0610 12:14:01.800245   73776 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0610 12:14:02.068768   73776 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0610 12:14:02.068798   73776 main.go:141] libmachine: Checking connection to Docker...
	I0610 12:14:02.068808   73776 main.go:141] libmachine: (bridge-491653) Calling .GetURL
	I0610 12:14:02.070160   73776 main.go:141] libmachine: (bridge-491653) DBG | Using libvirt version 6000000
	I0610 12:14:02.072580   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:02.072988   73776 main.go:141] libmachine: (bridge-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:2e:76", ip: ""} in network mk-bridge-491653: {Iface:virbr2 ExpiryTime:2024-06-10 13:13:51 +0000 UTC Type:0 Mac:52:54:00:63:2e:76 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:bridge-491653 Clientid:01:52:54:00:63:2e:76}
	I0610 12:14:02.073011   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined IP address 192.168.39.171 and MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:02.073205   73776 main.go:141] libmachine: Docker is up and running!
	I0610 12:14:02.073229   73776 main.go:141] libmachine: Reticulating splines...
	I0610 12:14:02.073238   73776 client.go:171] duration metric: took 24.974811126s to LocalClient.Create
	I0610 12:14:02.073261   73776 start.go:167] duration metric: took 24.974878581s to libmachine.API.Create "bridge-491653"
	I0610 12:14:02.073270   73776 start.go:293] postStartSetup for "bridge-491653" (driver="kvm2")
	I0610 12:14:02.073282   73776 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 12:14:02.073314   73776 main.go:141] libmachine: (bridge-491653) Calling .DriverName
	I0610 12:14:02.073553   73776 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 12:14:02.073578   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHHostname
	I0610 12:14:02.076000   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:02.076451   73776 main.go:141] libmachine: (bridge-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:2e:76", ip: ""} in network mk-bridge-491653: {Iface:virbr2 ExpiryTime:2024-06-10 13:13:51 +0000 UTC Type:0 Mac:52:54:00:63:2e:76 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:bridge-491653 Clientid:01:52:54:00:63:2e:76}
	I0610 12:14:02.076481   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined IP address 192.168.39.171 and MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:02.076658   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHPort
	I0610 12:14:02.076843   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHKeyPath
	I0610 12:14:02.077038   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHUsername
	I0610 12:14:02.077185   73776 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/bridge-491653/id_rsa Username:docker}
	I0610 12:14:02.159637   73776 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 12:14:02.164156   73776 info.go:137] Remote host: Buildroot 2023.02.9
	I0610 12:14:02.164178   73776 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/addons for local assets ...
	I0610 12:14:02.164232   73776 filesync.go:126] Scanning /home/jenkins/minikube-integration/19046-3880/.minikube/files for local assets ...
	I0610 12:14:02.164299   73776 filesync.go:149] local asset: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem -> 107582.pem in /etc/ssl/certs
	I0610 12:14:02.164386   73776 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 12:14:02.175211   73776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/ssl/certs/107582.pem --> /etc/ssl/certs/107582.pem (1708 bytes)
	I0610 12:14:02.199988   73776 start.go:296] duration metric: took 126.703485ms for postStartSetup
	I0610 12:14:02.200040   73776 main.go:141] libmachine: (bridge-491653) Calling .GetConfigRaw
	I0610 12:14:02.200690   73776 main.go:141] libmachine: (bridge-491653) Calling .GetIP
	I0610 12:14:02.203121   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:02.203496   73776 main.go:141] libmachine: (bridge-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:2e:76", ip: ""} in network mk-bridge-491653: {Iface:virbr2 ExpiryTime:2024-06-10 13:13:51 +0000 UTC Type:0 Mac:52:54:00:63:2e:76 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:bridge-491653 Clientid:01:52:54:00:63:2e:76}
	I0610 12:14:02.203529   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined IP address 192.168.39.171 and MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:02.203744   73776 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/bridge-491653/config.json ...
	I0610 12:14:02.203938   73776 start.go:128] duration metric: took 25.126653197s to createHost
	I0610 12:14:02.203958   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHHostname
	I0610 12:14:02.206543   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:02.206878   73776 main.go:141] libmachine: (bridge-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:2e:76", ip: ""} in network mk-bridge-491653: {Iface:virbr2 ExpiryTime:2024-06-10 13:13:51 +0000 UTC Type:0 Mac:52:54:00:63:2e:76 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:bridge-491653 Clientid:01:52:54:00:63:2e:76}
	I0610 12:14:02.206938   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined IP address 192.168.39.171 and MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:02.207101   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHPort
	I0610 12:14:02.207300   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHKeyPath
	I0610 12:14:02.207472   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHKeyPath
	I0610 12:14:02.207611   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHUsername
	I0610 12:14:02.207778   73776 main.go:141] libmachine: Using SSH client type: native
	I0610 12:14:02.207947   73776 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d880] 0x8305e0 <nil>  [] 0s} 192.168.39.171 22 <nil> <nil>}
	I0610 12:14:02.207958   73776 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 12:14:02.317436   73776 main.go:141] libmachine: SSH cmd err, output: <nil>: 1718021642.295775465
	
	I0610 12:14:02.317462   73776 fix.go:216] guest clock: 1718021642.295775465
	I0610 12:14:02.317475   73776 fix.go:229] Guest: 2024-06-10 12:14:02.295775465 +0000 UTC Remote: 2024-06-10 12:14:02.203948959 +0000 UTC m=+25.253044465 (delta=91.826506ms)
	I0610 12:14:02.317498   73776 fix.go:200] guest clock delta is within tolerance: 91.826506ms
	I0610 12:14:02.317505   73776 start.go:83] releasing machines lock for "bridge-491653", held for 25.240347325s
	I0610 12:14:02.317526   73776 main.go:141] libmachine: (bridge-491653) Calling .DriverName
	I0610 12:14:02.317825   73776 main.go:141] libmachine: (bridge-491653) Calling .GetIP
	I0610 12:14:02.321117   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:02.321454   73776 main.go:141] libmachine: (bridge-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:2e:76", ip: ""} in network mk-bridge-491653: {Iface:virbr2 ExpiryTime:2024-06-10 13:13:51 +0000 UTC Type:0 Mac:52:54:00:63:2e:76 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:bridge-491653 Clientid:01:52:54:00:63:2e:76}
	I0610 12:14:02.321489   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined IP address 192.168.39.171 and MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:02.321692   73776 main.go:141] libmachine: (bridge-491653) Calling .DriverName
	I0610 12:14:02.322155   73776 main.go:141] libmachine: (bridge-491653) Calling .DriverName
	I0610 12:14:02.322327   73776 main.go:141] libmachine: (bridge-491653) Calling .DriverName
	I0610 12:14:02.322404   73776 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 12:14:02.322444   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHHostname
	I0610 12:14:02.322522   73776 ssh_runner.go:195] Run: cat /version.json
	I0610 12:14:02.322539   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHHostname
	I0610 12:14:02.325079   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:02.325374   73776 main.go:141] libmachine: (bridge-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:2e:76", ip: ""} in network mk-bridge-491653: {Iface:virbr2 ExpiryTime:2024-06-10 13:13:51 +0000 UTC Type:0 Mac:52:54:00:63:2e:76 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:bridge-491653 Clientid:01:52:54:00:63:2e:76}
	I0610 12:14:02.325398   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined IP address 192.168.39.171 and MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:02.325416   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:02.325582   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHPort
	I0610 12:14:02.325750   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHKeyPath
	I0610 12:14:02.325855   73776 main.go:141] libmachine: (bridge-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:2e:76", ip: ""} in network mk-bridge-491653: {Iface:virbr2 ExpiryTime:2024-06-10 13:13:51 +0000 UTC Type:0 Mac:52:54:00:63:2e:76 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:bridge-491653 Clientid:01:52:54:00:63:2e:76}
	I0610 12:14:02.325876   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined IP address 192.168.39.171 and MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:02.325935   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHUsername
	I0610 12:14:02.326110   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHPort
	I0610 12:14:02.326125   73776 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/bridge-491653/id_rsa Username:docker}
	I0610 12:14:02.326239   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHKeyPath
	I0610 12:14:02.326357   73776 main.go:141] libmachine: (bridge-491653) Calling .GetSSHUsername
	I0610 12:14:02.326491   73776 sshutil.go:53] new ssh client: &{IP:192.168.39.171 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/bridge-491653/id_rsa Username:docker}
	I0610 12:14:02.410082   73776 ssh_runner.go:195] Run: systemctl --version
	I0610 12:14:02.443964   73776 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0610 12:14:02.604341   73776 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 12:14:02.610116   73776 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 12:14:02.610198   73776 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 12:14:02.627282   73776 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 12:14:02.627305   73776 start.go:494] detecting cgroup driver to use...
	I0610 12:14:02.627370   73776 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 12:14:02.646173   73776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 12:14:02.660181   73776 docker.go:217] disabling cri-docker service (if available) ...
	I0610 12:14:02.660236   73776 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0610 12:14:02.674624   73776 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0610 12:14:02.690837   73776 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0610 12:14:02.847855   73776 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0610 12:14:03.002441   73776 docker.go:233] disabling docker service ...
	I0610 12:14:03.002511   73776 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0610 12:14:03.019257   73776 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0610 12:14:03.035439   73776 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0610 12:14:03.201473   73776 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0610 12:14:03.342934   73776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
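The block above stops and masks cri-docker and docker so that CRI-O is the only container runtime left on the guest. A small verification sketch, assuming the same systemd unit names used in the commands above (illustrative, not part of the logged run):

	systemctl is-enabled docker.service cri-docker.service   # expect "masked" for both
	systemctl is-active docker                                # expect "inactive"
	systemctl is-active crio                                  # expect "active" once crio is restarted below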
	I0610 12:14:03.358599   73776 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 12:14:03.381564   73776 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0610 12:14:03.381653   73776 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 12:14:03.394126   73776 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0610 12:14:03.394220   73776 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 12:14:03.405963   73776 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 12:14:03.418165   73776 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 12:14:03.430297   73776 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 12:14:03.441411   73776 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 12:14:03.453573   73776 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0610 12:14:03.474893   73776 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
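Taken together, the sed edits above shape the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf. A rough sketch of the relevant keys afterwards (the exact file contents depend on the base image's defaults):

	sudo cat /etc/crio/crio.conf.d/02-crio.conf
	# Expect, roughly:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]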
	I0610 12:14:03.486351   73776 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 12:14:03.496883   73776 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0610 12:14:03.496961   73776 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0610 12:14:03.512575   73776 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
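The sysctl probe at 12:14:03.486351 fails because br_netfilter is not loaded yet; the modprobe and the echo into /proc above address that. A quick sanity-check sketch, assuming the standard /proc paths on the guest:

	sudo modprobe br_netfilter                       # no-op if already loaded
	sysctl net.bridge.bridge-nf-call-iptables        # should now resolve instead of "cannot stat"
	cat /proc/sys/net/ipv4/ip_forward                # expect 1 after the echo above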
	I0610 12:14:03.527463   73776 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 12:14:03.664703   73776 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0610 12:14:03.839015   73776 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0610 12:14:03.839110   73776 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0610 12:14:03.843831   73776 start.go:562] Will wait 60s for crictl version
	I0610 12:14:03.843895   73776 ssh_runner.go:195] Run: which crictl
	I0610 12:14:03.847627   73776 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 12:14:03.890418   73776 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0610 12:14:03.890509   73776 ssh_runner.go:195] Run: crio --version
	I0610 12:14:03.928938   73776 ssh_runner.go:195] Run: crio --version
	I0610 12:14:03.968180   73776 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0610 12:14:02.069432   71982 node_ready.go:53] node "flannel-491653" has status "Ready":"False"
	I0610 12:14:04.070227   71982 node_ready.go:53] node "flannel-491653" has status "Ready":"False"
	I0610 12:14:05.568984   71982 node_ready.go:49] node "flannel-491653" has status "Ready":"True"
	I0610 12:14:05.569010   71982 node_ready.go:38] duration metric: took 8.003659651s for node "flannel-491653" to be "Ready" ...
	I0610 12:14:05.569020   71982 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 12:14:05.583733   71982 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-w46sl" in "kube-system" namespace to be "Ready" ...
	I0610 12:14:03.969607   73776 main.go:141] libmachine: (bridge-491653) Calling .GetIP
	I0610 12:14:03.972436   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:03.972919   73776 main.go:141] libmachine: (bridge-491653) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:2e:76", ip: ""} in network mk-bridge-491653: {Iface:virbr2 ExpiryTime:2024-06-10 13:13:51 +0000 UTC Type:0 Mac:52:54:00:63:2e:76 Iaid: IPaddr:192.168.39.171 Prefix:24 Hostname:bridge-491653 Clientid:01:52:54:00:63:2e:76}
	I0610 12:14:03.972958   73776 main.go:141] libmachine: (bridge-491653) DBG | domain bridge-491653 has defined IP address 192.168.39.171 and MAC address 52:54:00:63:2e:76 in network mk-bridge-491653
	I0610 12:14:03.973254   73776 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0610 12:14:03.978722   73776 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
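The one-liner above rewrites /etc/hosts so that host.minikube.internal resolves to the gateway 192.168.39.1. A trivial check (illustrative only):

	grep host.minikube.internal /etc/hosts   # expect: 192.168.39.1	host.minikube.internal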
	I0610 12:14:03.991476   73776 kubeadm.go:877] updating cluster {Name:bridge-491653 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:bridge-491653 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.39.171 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0610 12:14:03.991616   73776 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 12:14:03.991679   73776 ssh_runner.go:195] Run: sudo crictl images --output json
	I0610 12:14:04.028736   73776 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0610 12:14:04.028826   73776 ssh_runner.go:195] Run: which lz4
	I0610 12:14:04.032981   73776 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0610 12:14:04.037006   73776 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 12:14:04.037048   73776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0610 12:14:05.405397   73776 crio.go:462] duration metric: took 1.372478405s to copy over tarball
	I0610 12:14:05.405480   73776 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
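Once the preload tarball has been unpacked into /var, the v1.30.1 control-plane images should appear in CRI-O's image store. A sketch of how one might confirm that (not something the logged run does):

	sudo crictl images | grep registry.k8s.io/kube-apiserver   # expect a v1.30.1 entry once extraction finishes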
	
	
	==> CRI-O <==
	Jun 10 12:14:09 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:14:09.415355168Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f1d455f-489c-4f91-9da9-c91d0a3dc5ae name=/runtime.v1.RuntimeService/Version
	Jun 10 12:14:09 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:14:09.416891200Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d4a8391-58f3-4801-99c2-ca315af98af7 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:14:09 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:14:09.417523796Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718021649417429950,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d4a8391-58f3-4801-99c2-ca315af98af7 name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:14:09 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:14:09.418027386Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8263ac1b-c546-4c07-8dcf-5b2d6e7e0ce4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:14:09 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:14:09.418092425Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8263ac1b-c546-4c07-8dcf-5b2d6e7e0ce4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:14:09 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:14:09.418276764Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e665a2fb5aecc808097f2fc05d79904e306ff78e8236dae6c9f7e09bce5e7d10,PodSandboxId:a9d3e9e4ec0e2b59767845bed3dd6c145cd768d55411c2f28d5bf26e499a28db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718020938827005760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8df0a38c-5e91-4b10-a303-c4eff9545669,},Annotations:map[string]string{io.kubernetes.container.hash: f3f5f7e9,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ba3974695bcb24d4b2cc8663b2aa027f6b410c22fea995bdcb40dfbd617433,PodSandboxId:9bb6ddaadc05193b6f50efff54d843ef10a59c3c2beed571999521b753dc71f5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718020938113799597,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wh756,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57cbf3d6-c149-4ae1-84d3-6df6a53ea091,},Annotations:map[string]string{io.kubernetes.container.hash: 17aa3131,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce134635118f7b2df18802cbc00fa342ccd3073a3443738aa4756dca35584e82,PodSandboxId:231539d0028b33c319a6b6db3544bbbea03be1eba9e25caf0c3a64056d67f4ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020937722768597,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5fgtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d948ca-122a-4042-8371-8a9422c187bc,},Annotations:map[string]string{io.kubernetes.container.hash: e063c420,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede65a395c6808abbdc027050debd911c62f6c6caf8a06f602eede88005380d3,PodSandboxId:bf214ddcb42cc130013624d4d24f34997d3174e052fe2e1d685309419830855b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020937588701477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fg8xx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e91ae09c-8821-4843-8c0d-
ea734433c213,},Annotations:map[string]string{io.kubernetes.container.hash: 6835c88c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0d5ff9212eb4d5532fe9dc9affa7331ae4ff1f5f5eb3a2e8e42b0133c616a70,PodSandboxId:4215d285111a70838d992640373dfb8d016f1e9d2bd7192ab9046d8b56fca700,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:171802091
7787872621,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d424dbcac48429c7d039d6107e300dc3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:746761e0904148694c14f03f97d46a2d2a04dd5aa50fc3f71fc632a115b40a21,PodSandboxId:b1d8bca51772f4492d8104060796b139c6eb38d6620714327699e98031b691fa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:171
8020917774776034,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c88a335fe375918bcfd46be4831435f7,},Annotations:map[string]string{io.kubernetes.container.hash: e653e9ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622e3a8adfbcb60a4cf30c281f0c60f9d7c3bff06b1cf111b2cc27d0692eebf5,PodSandboxId:8406c2ed5bf34cde9ed1c5ec05ae7753f39aefdede064fff143e68299e93dada,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718020917740984045,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f17235c9a9d5b1f2ccf38065ada94e3,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d832964f75572ba827c846938c023588ee720568af6f4209d8669bbbf714be81,PodSandboxId:1c90a6e342a603712e161be1f0f35d7f9b90848253ff2c30f0a613ddb819e8f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718020917693294419,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afe3822d5bbfe48baace364462a72d7,},Annotations:map[string]string{io.kubernetes.container.hash: ebaede52,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8263ac1b-c546-4c07-8dcf-5b2d6e7e0ce4 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:14:09 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:14:09.467110901Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=570f6099-4594-4543-8c47-0037c0f7973a name=/runtime.v1.RuntimeService/Version
	Jun 10 12:14:09 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:14:09.467183959Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=570f6099-4594-4543-8c47-0037c0f7973a name=/runtime.v1.RuntimeService/Version
	Jun 10 12:14:09 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:14:09.468830718Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e8926a1b-4d6e-40b4-8161-e335522ab42c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:14:09 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:14:09.469447348Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718021649469411209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e8926a1b-4d6e-40b4-8161-e335522ab42c name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:14:09 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:14:09.470149894Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3d47818e-6753-4f66-b16f-1d53214be744 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:14:09 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:14:09.470243642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3d47818e-6753-4f66-b16f-1d53214be744 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:14:09 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:14:09.470589511Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e665a2fb5aecc808097f2fc05d79904e306ff78e8236dae6c9f7e09bce5e7d10,PodSandboxId:a9d3e9e4ec0e2b59767845bed3dd6c145cd768d55411c2f28d5bf26e499a28db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718020938827005760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8df0a38c-5e91-4b10-a303-c4eff9545669,},Annotations:map[string]string{io.kubernetes.container.hash: f3f5f7e9,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ba3974695bcb24d4b2cc8663b2aa027f6b410c22fea995bdcb40dfbd617433,PodSandboxId:9bb6ddaadc05193b6f50efff54d843ef10a59c3c2beed571999521b753dc71f5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718020938113799597,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wh756,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57cbf3d6-c149-4ae1-84d3-6df6a53ea091,},Annotations:map[string]string{io.kubernetes.container.hash: 17aa3131,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce134635118f7b2df18802cbc00fa342ccd3073a3443738aa4756dca35584e82,PodSandboxId:231539d0028b33c319a6b6db3544bbbea03be1eba9e25caf0c3a64056d67f4ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020937722768597,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5fgtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d948ca-122a-4042-8371-8a9422c187bc,},Annotations:map[string]string{io.kubernetes.container.hash: e063c420,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede65a395c6808abbdc027050debd911c62f6c6caf8a06f602eede88005380d3,PodSandboxId:bf214ddcb42cc130013624d4d24f34997d3174e052fe2e1d685309419830855b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020937588701477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fg8xx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e91ae09c-8821-4843-8c0d-
ea734433c213,},Annotations:map[string]string{io.kubernetes.container.hash: 6835c88c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0d5ff9212eb4d5532fe9dc9affa7331ae4ff1f5f5eb3a2e8e42b0133c616a70,PodSandboxId:4215d285111a70838d992640373dfb8d016f1e9d2bd7192ab9046d8b56fca700,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:171802091
7787872621,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d424dbcac48429c7d039d6107e300dc3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:746761e0904148694c14f03f97d46a2d2a04dd5aa50fc3f71fc632a115b40a21,PodSandboxId:b1d8bca51772f4492d8104060796b139c6eb38d6620714327699e98031b691fa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:171
8020917774776034,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c88a335fe375918bcfd46be4831435f7,},Annotations:map[string]string{io.kubernetes.container.hash: e653e9ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622e3a8adfbcb60a4cf30c281f0c60f9d7c3bff06b1cf111b2cc27d0692eebf5,PodSandboxId:8406c2ed5bf34cde9ed1c5ec05ae7753f39aefdede064fff143e68299e93dada,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718020917740984045,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f17235c9a9d5b1f2ccf38065ada94e3,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d832964f75572ba827c846938c023588ee720568af6f4209d8669bbbf714be81,PodSandboxId:1c90a6e342a603712e161be1f0f35d7f9b90848253ff2c30f0a613ddb819e8f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718020917693294419,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afe3822d5bbfe48baace364462a72d7,},Annotations:map[string]string{io.kubernetes.container.hash: ebaede52,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3d47818e-6753-4f66-b16f-1d53214be744 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:14:09 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:14:09.510954063Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ab7ae92-49b9-461f-b311-f37d508e1bb5 name=/runtime.v1.RuntimeService/Version
	Jun 10 12:14:09 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:14:09.511028084Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ab7ae92-49b9-461f-b311-f37d508e1bb5 name=/runtime.v1.RuntimeService/Version
	Jun 10 12:14:09 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:14:09.512716532Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af133448-7c96-4bb4-9f7a-a007248406de name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:14:09 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:14:09.513096079Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1718021649513074127,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133260,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af133448-7c96-4bb4-9f7a-a007248406de name=/runtime.v1.ImageService/ImageFsInfo
	Jun 10 12:14:09 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:14:09.513775317Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3e20367a-1f26-4dec-b05a-03e7758fb891 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:14:09 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:14:09.513860256Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3e20367a-1f26-4dec-b05a-03e7758fb891 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:14:09 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:14:09.514050210Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e665a2fb5aecc808097f2fc05d79904e306ff78e8236dae6c9f7e09bce5e7d10,PodSandboxId:a9d3e9e4ec0e2b59767845bed3dd6c145cd768d55411c2f28d5bf26e499a28db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718020938827005760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8df0a38c-5e91-4b10-a303-c4eff9545669,},Annotations:map[string]string{io.kubernetes.container.hash: f3f5f7e9,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ba3974695bcb24d4b2cc8663b2aa027f6b410c22fea995bdcb40dfbd617433,PodSandboxId:9bb6ddaadc05193b6f50efff54d843ef10a59c3c2beed571999521b753dc71f5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718020938113799597,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wh756,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57cbf3d6-c149-4ae1-84d3-6df6a53ea091,},Annotations:map[string]string{io.kubernetes.container.hash: 17aa3131,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce134635118f7b2df18802cbc00fa342ccd3073a3443738aa4756dca35584e82,PodSandboxId:231539d0028b33c319a6b6db3544bbbea03be1eba9e25caf0c3a64056d67f4ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020937722768597,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5fgtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d948ca-122a-4042-8371-8a9422c187bc,},Annotations:map[string]string{io.kubernetes.container.hash: e063c420,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede65a395c6808abbdc027050debd911c62f6c6caf8a06f602eede88005380d3,PodSandboxId:bf214ddcb42cc130013624d4d24f34997d3174e052fe2e1d685309419830855b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020937588701477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fg8xx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e91ae09c-8821-4843-8c0d-
ea734433c213,},Annotations:map[string]string{io.kubernetes.container.hash: 6835c88c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0d5ff9212eb4d5532fe9dc9affa7331ae4ff1f5f5eb3a2e8e42b0133c616a70,PodSandboxId:4215d285111a70838d992640373dfb8d016f1e9d2bd7192ab9046d8b56fca700,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:171802091
7787872621,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d424dbcac48429c7d039d6107e300dc3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:746761e0904148694c14f03f97d46a2d2a04dd5aa50fc3f71fc632a115b40a21,PodSandboxId:b1d8bca51772f4492d8104060796b139c6eb38d6620714327699e98031b691fa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:171
8020917774776034,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c88a335fe375918bcfd46be4831435f7,},Annotations:map[string]string{io.kubernetes.container.hash: e653e9ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622e3a8adfbcb60a4cf30c281f0c60f9d7c3bff06b1cf111b2cc27d0692eebf5,PodSandboxId:8406c2ed5bf34cde9ed1c5ec05ae7753f39aefdede064fff143e68299e93dada,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718020917740984045,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f17235c9a9d5b1f2ccf38065ada94e3,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d832964f75572ba827c846938c023588ee720568af6f4209d8669bbbf714be81,PodSandboxId:1c90a6e342a603712e161be1f0f35d7f9b90848253ff2c30f0a613ddb819e8f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718020917693294419,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afe3822d5bbfe48baace364462a72d7,},Annotations:map[string]string{io.kubernetes.container.hash: ebaede52,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3e20367a-1f26-4dec-b05a-03e7758fb891 name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:14:09 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:14:09.614375692Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=6cecace2-76b2-44ea-a22f-95b60e6f6b4c name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 10 12:14:09 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:14:09.614806588Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8f274626a4cd1d748e15f01ed1015a69d5695d68691f563efaed1e4a249de1fb,Metadata:&PodSandboxMetadata{Name:metrics-server-569cc877fc-j58s9,Uid:f1c91612-b967-447e-bc71-13ba0d11864b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718020938853624412,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-569cc877fc-j58s9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f1c91612-b967-447e-bc71-13ba0d11864b,k8s-app: metrics-server,pod-template-hash: 569cc877fc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T12:02:18.544437853Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a9d3e9e4ec0e2b59767845bed3dd6c145cd768d55411c2f28d5bf26e499a28db,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8df0a38c-5e91-4b10-a303-c4ef
f9545669,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718020938736826959,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8df0a38c-5e91-4b10-a303-c4eff9545669,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provision
er\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-06-10T12:02:18.419324496Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9bb6ddaadc05193b6f50efff54d843ef10a59c3c2beed571999521b753dc71f5,Metadata:&PodSandboxMetadata{Name:kube-proxy-wh756,Uid:57cbf3d6-c149-4ae1-84d3-6df6a53ea091,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718020937773897184,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-wh756,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57cbf3d6-c149-4ae1-84d3-6df6a53ea091,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T12:02:15.660301388Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:231539d0028b33c319a6b6db3544bbbea03be1eba9e25caf0c3a64056d67f4ab,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4
d-5fgtk,Uid:03d948ca-122a-4042-8371-8a9422c187bc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718020937082409368,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-5fgtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d948ca-122a-4042-8371-8a9422c187bc,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T12:02:16.770549650Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bf214ddcb42cc130013624d4d24f34997d3174e052fe2e1d685309419830855b,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-fg8xx,Uid:e91ae09c-8821-4843-8c0d-ea734433c213,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718020937052264366,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-fg8xx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e91ae09c-8821-4843-8c0d-ea734433c213,k8s-app: kube-dns,
pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-06-10T12:02:16.741570376Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b1d8bca51772f4492d8104060796b139c6eb38d6620714327699e98031b691fa,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-281114,Uid:c88a335fe375918bcfd46be4831435f7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718020917545413357,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c88a335fe375918bcfd46be4831435f7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.222:2379,kubernetes.io/config.hash: c88a335fe375918bcfd46be4831435f7,kubernetes.io/config.seen: 2024-06-10T12:01:57.100025055Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4215d285111a70838d992640373dfb8d016
f1e9d2bd7192ab9046d8b56fca700,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-281114,Uid:d424dbcac48429c7d039d6107e300dc3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718020917541361553,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d424dbcac48429c7d039d6107e300dc3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d424dbcac48429c7d039d6107e300dc3,kubernetes.io/config.seen: 2024-06-10T12:01:57.100018517Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1c90a6e342a603712e161be1f0f35d7f9b90848253ff2c30f0a613ddb819e8f8,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-281114,Uid:3afe3822d5bbfe48baace364462a72d7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718020917533301467,Labels:map[stri
ng]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afe3822d5bbfe48baace364462a72d7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.222:8444,kubernetes.io/config.hash: 3afe3822d5bbfe48baace364462a72d7,kubernetes.io/config.seen: 2024-06-10T12:01:57.100026211Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8406c2ed5bf34cde9ed1c5ec05ae7753f39aefdede064fff143e68299e93dada,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-281114,Uid:3f17235c9a9d5b1f2ccf38065ada94e3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1718020917527990351,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 3f17235c9a9d5b1f2ccf38065ada94e3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3f17235c9a9d5b1f2ccf38065ada94e3,kubernetes.io/config.seen: 2024-06-10T12:01:57.100023665Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=6cecace2-76b2-44ea-a22f-95b60e6f6b4c name=/runtime.v1.RuntimeService/ListPodSandbox
	Jun 10 12:14:09 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:14:09.615882765Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fc350300-fb42-4461-90d5-8ceb021662ee name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:14:09 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:14:09.615961378Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fc350300-fb42-4461-90d5-8ceb021662ee name=/runtime.v1.RuntimeService/ListContainers
	Jun 10 12:14:09 default-k8s-diff-port-281114 crio[726]: time="2024-06-10 12:14:09.616309996Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e665a2fb5aecc808097f2fc05d79904e306ff78e8236dae6c9f7e09bce5e7d10,PodSandboxId:a9d3e9e4ec0e2b59767845bed3dd6c145cd768d55411c2f28d5bf26e499a28db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1718020938827005760,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8df0a38c-5e91-4b10-a303-c4eff9545669,},Annotations:map[string]string{io.kubernetes.container.hash: f3f5f7e9,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26ba3974695bcb24d4b2cc8663b2aa027f6b410c22fea995bdcb40dfbd617433,PodSandboxId:9bb6ddaadc05193b6f50efff54d843ef10a59c3c2beed571999521b753dc71f5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1718020938113799597,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wh756,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57cbf3d6-c149-4ae1-84d3-6df6a53ea091,},Annotations:map[string]string{io.kubernetes.container.hash: 17aa3131,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce134635118f7b2df18802cbc00fa342ccd3073a3443738aa4756dca35584e82,PodSandboxId:231539d0028b33c319a6b6db3544bbbea03be1eba9e25caf0c3a64056d67f4ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020937722768597,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-5fgtk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03d948ca-122a-4042-8371-8a9422c187bc,},Annotations:map[string]string{io.kubernetes.container.hash: e063c420,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede65a395c6808abbdc027050debd911c62f6c6caf8a06f602eede88005380d3,PodSandboxId:bf214ddcb42cc130013624d4d24f34997d3174e052fe2e1d685309419830855b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1718020937588701477,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-fg8xx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e91ae09c-8821-4843-8c0d-
ea734433c213,},Annotations:map[string]string{io.kubernetes.container.hash: 6835c88c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0d5ff9212eb4d5532fe9dc9affa7331ae4ff1f5f5eb3a2e8e42b0133c616a70,PodSandboxId:4215d285111a70838d992640373dfb8d016f1e9d2bd7192ab9046d8b56fca700,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:171802091
7787872621,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d424dbcac48429c7d039d6107e300dc3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:746761e0904148694c14f03f97d46a2d2a04dd5aa50fc3f71fc632a115b40a21,PodSandboxId:b1d8bca51772f4492d8104060796b139c6eb38d6620714327699e98031b691fa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:171
8020917774776034,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c88a335fe375918bcfd46be4831435f7,},Annotations:map[string]string{io.kubernetes.container.hash: e653e9ea,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:622e3a8adfbcb60a4cf30c281f0c60f9d7c3bff06b1cf111b2cc27d0692eebf5,PodSandboxId:8406c2ed5bf34cde9ed1c5ec05ae7753f39aefdede064fff143e68299e93dada,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1718020917740984045,Label
s:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f17235c9a9d5b1f2ccf38065ada94e3,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d832964f75572ba827c846938c023588ee720568af6f4209d8669bbbf714be81,PodSandboxId:1c90a6e342a603712e161be1f0f35d7f9b90848253ff2c30f0a613ddb819e8f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1718020917693294419,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-281114,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3afe3822d5bbfe48baace364462a72d7,},Annotations:map[string]string{io.kubernetes.container.hash: ebaede52,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fc350300-fb42-4461-90d5-8ceb021662ee name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e665a2fb5aecc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   11 minutes ago      Running             storage-provisioner       0                   a9d3e9e4ec0e2       storage-provisioner
	26ba3974695bc       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   11 minutes ago      Running             kube-proxy                0                   9bb6ddaadc051       kube-proxy-wh756
	ce134635118f7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   11 minutes ago      Running             coredns                   0                   231539d0028b3       coredns-7db6d8ff4d-5fgtk
	ede65a395c680       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   11 minutes ago      Running             coredns                   0                   bf214ddcb42cc       coredns-7db6d8ff4d-fg8xx
	c0d5ff9212eb4       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   12 minutes ago      Running             kube-controller-manager   2                   4215d285111a7       kube-controller-manager-default-k8s-diff-port-281114
	746761e090414       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   12 minutes ago      Running             etcd                      2                   b1d8bca51772f       etcd-default-k8s-diff-port-281114
	622e3a8adfbcb       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   12 minutes ago      Running             kube-scheduler            2                   8406c2ed5bf34       kube-scheduler-default-k8s-diff-port-281114
	d832964f75572       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   12 minutes ago      Running             kube-apiserver            2                   1c90a6e342a60       kube-apiserver-default-k8s-diff-port-281114
	
	
	==> coredns [ce134635118f7b2df18802cbc00fa342ccd3073a3443738aa4756dca35584e82] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ede65a395c6808abbdc027050debd911c62f6c6caf8a06f602eede88005380d3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-281114
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-281114
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b8e7e33180e1f47cc83cca2e1a263af6c57df959
	                    minikube.k8s.io/name=default-k8s-diff-port-281114
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_06_10T12_02_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Jun 2024 12:02:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-281114
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Jun 2024 12:14:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Jun 2024 12:12:35 +0000   Mon, 10 Jun 2024 12:01:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Jun 2024 12:12:35 +0000   Mon, 10 Jun 2024 12:01:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Jun 2024 12:12:35 +0000   Mon, 10 Jun 2024 12:01:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Jun 2024 12:12:35 +0000   Mon, 10 Jun 2024 12:02:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.222
	  Hostname:    default-k8s-diff-port-281114
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7fe69d065bac483cbbf95ca19ccd8066
	  System UUID:                7fe69d06-5bac-483c-bbf9-5ca19ccd8066
	  Boot ID:                    6463d4c2-e1b4-4d25-8caf-0d032d5e18c0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-5fgtk                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 coredns-7db6d8ff4d-fg8xx                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 etcd-default-k8s-diff-port-281114                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kube-apiserver-default-k8s-diff-port-281114             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-281114    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-wh756                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-default-k8s-diff-port-281114             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-569cc877fc-j58s9                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         11m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 11m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node default-k8s-diff-port-281114 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node default-k8s-diff-port-281114 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node default-k8s-diff-port-281114 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m   node-controller  Node default-k8s-diff-port-281114 event: Registered Node default-k8s-diff-port-281114 in Controller
	
	
	==> dmesg <==
	[  +0.040033] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.610449] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.841111] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.542956] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.780331] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.061871] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061073] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.169738] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.146134] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +0.287034] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[Jun10 11:57] systemd-fstab-generator[805]: Ignoring "noauto" option for root device
	[  +1.939897] systemd-fstab-generator[929]: Ignoring "noauto" option for root device
	[  +0.070518] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.515579] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.048719] kauditd_printk_skb: 50 callbacks suppressed
	[  +7.141243] kauditd_printk_skb: 27 callbacks suppressed
	[Jun10 12:01] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.844952] systemd-fstab-generator[3572]: Ignoring "noauto" option for root device
	[Jun10 12:02] kauditd_printk_skb: 57 callbacks suppressed
	[  +1.589189] systemd-fstab-generator[3894]: Ignoring "noauto" option for root device
	[ +14.360359] systemd-fstab-generator[4106]: Ignoring "noauto" option for root device
	[  +0.035003] kauditd_printk_skb: 14 callbacks suppressed
	[Jun10 12:03] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [746761e0904148694c14f03f97d46a2d2a04dd5aa50fc3f71fc632a115b40a21] <==
	{"level":"info","ts":"2024-06-10T12:10:40.671944Z","caller":"traceutil/trace.go:171","msg":"trace[124069812] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:857; }","duration":"110.296027ms","start":"2024-06-10T12:10:40.561635Z","end":"2024-06-10T12:10:40.671931Z","steps":["trace[124069812] 'agreement among raft nodes before linearized reading'  (duration: 110.196759ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T12:10:40.671776Z","caller":"traceutil/trace.go:171","msg":"trace[1968203184] linearizableReadLoop","detail":"{readStateIndex:971; appliedIndex:970; }","duration":"109.708919ms","start":"2024-06-10T12:10:40.561639Z","end":"2024-06-10T12:10:40.671348Z","steps":["trace[1968203184] 'read index received'  (duration: 73.653µs)","trace[1968203184] 'applied index is now lower than readState.Index'  (duration: 109.633603ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-10T12:10:41.314818Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.194237ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-10T12:10:41.314972Z","caller":"traceutil/trace.go:171","msg":"trace[1563114442] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:857; }","duration":"136.428358ms","start":"2024-06-10T12:10:41.178527Z","end":"2024-06-10T12:10:41.314956Z","steps":["trace[1563114442] 'range keys from in-memory index tree'  (duration: 136.129657ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T12:11:27.108177Z","caller":"traceutil/trace.go:171","msg":"trace[433245760] transaction","detail":"{read_only:false; response_revision:894; number_of_response:1; }","duration":"178.085037ms","start":"2024-06-10T12:11:26.930069Z","end":"2024-06-10T12:11:27.108154Z","steps":["trace[433245760] 'process raft request'  (duration: 177.901021ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T12:11:27.519699Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.66108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-10T12:11:27.519912Z","caller":"traceutil/trace.go:171","msg":"trace[2021909351] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:894; }","duration":"173.919387ms","start":"2024-06-10T12:11:27.345972Z","end":"2024-06-10T12:11:27.519891Z","steps":["trace[2021909351] 'range keys from in-memory index tree'  (duration: 173.595186ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T12:11:58.944782Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":677}
	{"level":"info","ts":"2024-06-10T12:11:58.957863Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":677,"took":"12.733606ms","hash":2635918690,"current-db-size-bytes":2154496,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2154496,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-06-10T12:11:58.957942Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2635918690,"revision":677,"compact-revision":-1}
	{"level":"warn","ts":"2024-06-10T12:12:02.489612Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.075554ms","expected-duration":"100ms","prefix":"","request":"header:<ID:720734278757650239 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.222\" mod_revision:914 > success:<request_put:<key:\"/registry/masterleases/192.168.50.222\" value_size:67 lease:720734278757650237 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.222\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-06-10T12:12:02.489754Z","caller":"traceutil/trace.go:171","msg":"trace[985553837] linearizableReadLoop","detail":"{readStateIndex:1055; appliedIndex:1054; }","duration":"147.056877ms","start":"2024-06-10T12:12:02.342671Z","end":"2024-06-10T12:12:02.489727Z","steps":["trace[985553837] 'read index received'  (duration: 62.959µs)","trace[985553837] 'applied index is now lower than readState.Index'  (duration: 146.992655ms)"],"step_count":2}
	{"level":"warn","ts":"2024-06-10T12:12:02.489885Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.227995ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-10T12:12:02.489933Z","caller":"traceutil/trace.go:171","msg":"trace[1939561443] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:923; }","duration":"147.31206ms","start":"2024-06-10T12:12:02.342614Z","end":"2024-06-10T12:12:02.489926Z","steps":["trace[1939561443] 'agreement among raft nodes before linearized reading'  (duration: 147.173667ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T12:12:02.489884Z","caller":"traceutil/trace.go:171","msg":"trace[1958847794] transaction","detail":"{read_only:false; response_revision:923; number_of_response:1; }","duration":"225.277992ms","start":"2024-06-10T12:12:02.264582Z","end":"2024-06-10T12:12:02.48986Z","steps":["trace[1958847794] 'process raft request'  (duration: 61.73817ms)","trace[1958847794] 'compare'  (duration: 162.959965ms)"],"step_count":2}
	{"level":"info","ts":"2024-06-10T12:12:25.843003Z","caller":"traceutil/trace.go:171","msg":"trace[130043406] transaction","detail":"{read_only:false; response_revision:942; number_of_response:1; }","duration":"196.840521ms","start":"2024-06-10T12:12:25.646141Z","end":"2024-06-10T12:12:25.842982Z","steps":["trace[130043406] 'process raft request'  (duration: 196.289841ms)"],"step_count":1}
	{"level":"info","ts":"2024-06-10T12:13:32.69492Z","caller":"traceutil/trace.go:171","msg":"trace[1594611449] linearizableReadLoop","detail":"{readStateIndex:1150; appliedIndex:1149; }","duration":"445.483348ms","start":"2024-06-10T12:13:32.2494Z","end":"2024-06-10T12:13:32.694883Z","steps":["trace[1594611449] 'read index received'  (duration: 445.250371ms)","trace[1594611449] 'applied index is now lower than readState.Index'  (duration: 231.913µs)"],"step_count":2}
	{"level":"warn","ts":"2024-06-10T12:13:32.695231Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"445.760366ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:422"}
	{"level":"info","ts":"2024-06-10T12:13:32.695272Z","caller":"traceutil/trace.go:171","msg":"trace[76481845] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:999; }","duration":"445.886355ms","start":"2024-06-10T12:13:32.249378Z","end":"2024-06-10T12:13:32.695265Z","steps":["trace[76481845] 'agreement among raft nodes before linearized reading'  (duration: 445.692533ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T12:13:32.695305Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-10T12:13:32.249334Z","time spent":"445.959195ms","remote":"127.0.0.1:44696","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":1,"response size":445,"request content":"key:\"/registry/services/endpoints/default/kubernetes\" "}
	{"level":"warn","ts":"2024-06-10T12:13:32.695526Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"352.327149ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-06-10T12:13:32.695561Z","caller":"traceutil/trace.go:171","msg":"trace[990430102] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:999; }","duration":"352.38347ms","start":"2024-06-10T12:13:32.34317Z","end":"2024-06-10T12:13:32.695553Z","steps":["trace[990430102] 'agreement among raft nodes before linearized reading'  (duration: 352.332504ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T12:13:32.695588Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-10T12:13:32.343157Z","time spent":"352.426717ms","remote":"127.0.0.1:44526","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-06-10T12:13:32.69501Z","caller":"traceutil/trace.go:171","msg":"trace[1892796327] transaction","detail":"{read_only:false; response_revision:999; number_of_response:1; }","duration":"463.949227ms","start":"2024-06-10T12:13:32.231047Z","end":"2024-06-10T12:13:32.694996Z","steps":["trace[1892796327] 'process raft request'  (duration: 463.667376ms)"],"step_count":1}
	{"level":"warn","ts":"2024-06-10T12:13:32.696576Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-10T12:13:32.231026Z","time spent":"465.419365ms","remote":"127.0.0.1:44696","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:997 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 12:14:10 up 17 min,  0 users,  load average: 0.18, 0.19, 0.18
	Linux default-k8s-diff-port-281114 5.10.207 #1 SMP Thu Jun 6 14:49:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d832964f75572ba827c846938c023588ee720568af6f4209d8669bbbf714be81] <==
	I0610 12:08:01.439280       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:10:01.438622       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:10:01.438886       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0610 12:10:01.438919       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:10:01.440154       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:10:01.440260       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0610 12:10:01.440294       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:12:00.442240       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:12:00.442408       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0610 12:12:01.443267       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:12:01.443384       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0610 12:12:01.443440       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:12:01.443321       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:12:01.443682       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0610 12:12:01.444561       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:13:01.444416       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:13:01.444531       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0610 12:13:01.444541       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0610 12:13:01.444751       1 handler_proxy.go:93] no RequestInfo found in the context
	E0610 12:13:01.444881       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0610 12:13:01.446696       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c0d5ff9212eb4d5532fe9dc9affa7331ae4ff1f5f5eb3a2e8e42b0133c616a70] <==
	I0610 12:08:26.019428       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="202.193µs"
	E0610 12:08:45.902550       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:08:46.378406       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:09:15.907657       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:09:16.386109       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:09:45.913432       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:09:46.393373       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:10:15.921540       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:10:16.404906       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:10:45.927574       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:10:46.414726       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:11:15.935389       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:11:16.422346       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:11:45.941902       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:11:46.430030       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:12:15.948144       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:12:16.437580       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0610 12:12:45.954394       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:12:46.445773       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0610 12:13:10.024320       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="271.12µs"
	E0610 12:13:15.959725       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:13:16.453061       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0610 12:13:23.019876       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="66.177µs"
	E0610 12:13:45.966170       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0610 12:13:46.463069       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [26ba3974695bcb24d4b2cc8663b2aa027f6b410c22fea995bdcb40dfbd617433] <==
	I0610 12:02:18.619506       1 server_linux.go:69] "Using iptables proxy"
	I0610 12:02:18.644097       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.222"]
	I0610 12:02:18.717027       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0610 12:02:18.717076       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0610 12:02:18.717094       1 server_linux.go:165] "Using iptables Proxier"
	I0610 12:02:18.721203       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 12:02:18.721428       1 server.go:872] "Version info" version="v1.30.1"
	I0610 12:02:18.721512       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 12:02:18.722857       1 config.go:192] "Starting service config controller"
	I0610 12:02:18.722887       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0610 12:02:18.722913       1 config.go:101] "Starting endpoint slice config controller"
	I0610 12:02:18.722917       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0610 12:02:18.723640       1 config.go:319] "Starting node config controller"
	I0610 12:02:18.723678       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0610 12:02:18.823586       1 shared_informer.go:320] Caches are synced for service config
	I0610 12:02:18.823571       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0610 12:02:18.823821       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [622e3a8adfbcb60a4cf30c281f0c60f9d7c3bff06b1cf111b2cc27d0692eebf5] <==
	W0610 12:02:00.455285       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 12:02:00.455310       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 12:02:01.325571       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0610 12:02:01.325614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0610 12:02:01.334373       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 12:02:01.334446       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 12:02:01.347034       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 12:02:01.347124       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0610 12:02:01.518886       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 12:02:01.519561       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0610 12:02:01.536357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 12:02:01.536641       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0610 12:02:01.543964       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0610 12:02:01.544118       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0610 12:02:01.591526       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0610 12:02:01.592685       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0610 12:02:01.641782       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 12:02:01.641811       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 12:02:01.652518       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 12:02:01.652560       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 12:02:01.670081       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 12:02:01.671304       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0610 12:02:01.681136       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0610 12:02:01.681234       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 12:02:04.839248       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jun 10 12:12:03 default-k8s-diff-port-281114 kubelet[3901]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:12:03 default-k8s-diff-port-281114 kubelet[3901]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:12:03 default-k8s-diff-port-281114 kubelet[3901]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:12:16 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:12:16.003152    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j58s9" podUID="f1c91612-b967-447e-bc71-13ba0d11864b"
	Jun 10 12:12:31 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:12:31.005087    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j58s9" podUID="f1c91612-b967-447e-bc71-13ba0d11864b"
	Jun 10 12:12:45 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:12:45.003613    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j58s9" podUID="f1c91612-b967-447e-bc71-13ba0d11864b"
	Jun 10 12:12:59 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:12:59.017230    3901 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jun 10 12:12:59 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:12:59.017327    3901 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jun 10 12:12:59 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:12:59.017693    3901 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cj5hx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-569cc877fc-j58s9_kube-system(f1c91612-b967-447e-bc71-13ba0d11864b): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jun 10 12:12:59 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:12:59.017760    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-569cc877fc-j58s9" podUID="f1c91612-b967-447e-bc71-13ba0d11864b"
	Jun 10 12:13:03 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:13:03.019173    3901 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:13:03 default-k8s-diff-port-281114 kubelet[3901]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:13:03 default-k8s-diff-port-281114 kubelet[3901]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:13:03 default-k8s-diff-port-281114 kubelet[3901]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:13:03 default-k8s-diff-port-281114 kubelet[3901]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:13:10 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:13:10.005291    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j58s9" podUID="f1c91612-b967-447e-bc71-13ba0d11864b"
	Jun 10 12:13:23 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:13:23.005441    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j58s9" podUID="f1c91612-b967-447e-bc71-13ba0d11864b"
	Jun 10 12:13:38 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:13:38.007874    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j58s9" podUID="f1c91612-b967-447e-bc71-13ba0d11864b"
	Jun 10 12:13:52 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:13:52.003983    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j58s9" podUID="f1c91612-b967-447e-bc71-13ba0d11864b"
	Jun 10 12:14:03 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:14:03.021426    3901 iptables.go:577] "Could not set up iptables canary" err=<
	Jun 10 12:14:03 default-k8s-diff-port-281114 kubelet[3901]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jun 10 12:14:03 default-k8s-diff-port-281114 kubelet[3901]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 12:14:03 default-k8s-diff-port-281114 kubelet[3901]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 12:14:03 default-k8s-diff-port-281114 kubelet[3901]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jun 10 12:14:07 default-k8s-diff-port-281114 kubelet[3901]: E0610 12:14:07.006902    3901 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-j58s9" podUID="f1c91612-b967-447e-bc71-13ba0d11864b"
	
	
	==> storage-provisioner [e665a2fb5aecc808097f2fc05d79904e306ff78e8236dae6c9f7e09bce5e7d10] <==
	I0610 12:02:18.986580       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 12:02:19.001826       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 12:02:19.001932       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 12:02:19.016570       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 12:02:19.016928       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b7af0c67-f70d-456a-83bf-769aabe5eb5d", APIVersion:"v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-281114_03d23b81-e99f-4aae-8541-a05706e8c2c8 became leader
	I0610 12:02:19.016961       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-281114_03d23b81-e99f-4aae-8541-a05706e8c2c8!
	I0610 12:02:19.125265       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-281114_03d23b81-e99f-4aae-8541-a05706e8c2c8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-281114 -n default-k8s-diff-port-281114
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-281114 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-j58s9
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-281114 describe pod metrics-server-569cc877fc-j58s9
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-281114 describe pod metrics-server-569cc877fc-j58s9: exit status 1 (59.725257ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-j58s9" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-281114 describe pod metrics-server-569cc877fc-j58s9: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (167.33s)
E0610 12:14:12.452716   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
E0610 12:14:15.798909   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/no-preload-298179/client.crt: no such file or directory

                                                
                                    

Test pass (249/317)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 48.86
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.1/json-events 12.03
13 TestDownloadOnly/v1.30.1/preload-exists 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.06
18 TestDownloadOnly/v1.30.1/DeleteAll 0.13
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.55
22 TestOffline 100.66
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.04
27 TestAddons/Setup 143.17
29 TestAddons/parallel/Registry 17.01
31 TestAddons/parallel/InspektorGadget 23.99
33 TestAddons/parallel/HelmTiller 12.98
35 TestAddons/parallel/CSI 87.02
36 TestAddons/parallel/Headlamp 14.28
37 TestAddons/parallel/CloudSpanner 5.55
38 TestAddons/parallel/LocalPath 12.1
39 TestAddons/parallel/NvidiaDevicePlugin 6.48
40 TestAddons/parallel/Yakd 6.01
44 TestAddons/serial/GCPAuth/Namespaces 0.13
46 TestCertOptions 75.29
47 TestCertExpiration 283.12
49 TestForceSystemdFlag 55.9
50 TestForceSystemdEnv 78.67
52 TestKVMDriverInstallOrUpdate 4.12
56 TestErrorSpam/setup 38.59
57 TestErrorSpam/start 0.35
58 TestErrorSpam/status 0.73
59 TestErrorSpam/pause 1.5
60 TestErrorSpam/unpause 1.58
61 TestErrorSpam/stop 5.76
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 55.52
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 54.76
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.76
73 TestFunctional/serial/CacheCmd/cache/add_local 2.43
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.99
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 40.98
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.44
84 TestFunctional/serial/LogsFileCmd 1.44
85 TestFunctional/serial/InvalidService 4.22
87 TestFunctional/parallel/ConfigCmd 0.31
88 TestFunctional/parallel/DashboardCmd 30.11
89 TestFunctional/parallel/DryRun 0.28
90 TestFunctional/parallel/InternationalLanguage 0.15
91 TestFunctional/parallel/StatusCmd 1.3
95 TestFunctional/parallel/ServiceCmdConnect 9.58
96 TestFunctional/parallel/AddonsCmd 0.13
97 TestFunctional/parallel/PersistentVolumeClaim 50.65
99 TestFunctional/parallel/SSHCmd 0.44
100 TestFunctional/parallel/CpCmd 1.28
101 TestFunctional/parallel/MySQL 27.87
102 TestFunctional/parallel/FileSync 0.19
103 TestFunctional/parallel/CertSync 1.55
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
111 TestFunctional/parallel/License 1.01
112 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
113 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
114 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
115 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
116 TestFunctional/parallel/ImageCommands/ImageBuild 7.58
117 TestFunctional/parallel/ImageCommands/Setup 1.95
118 TestFunctional/parallel/ServiceCmd/DeployApp 10.19
128 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.22
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.66
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.33
131 TestFunctional/parallel/ServiceCmd/List 0.38
132 TestFunctional/parallel/ServiceCmd/JSONOutput 0.37
133 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
134 TestFunctional/parallel/ServiceCmd/Format 0.43
135 TestFunctional/parallel/ServiceCmd/URL 0.43
136 TestFunctional/parallel/ProfileCmd/profile_not_create 0.55
137 TestFunctional/parallel/ProfileCmd/profile_list 0.38
138 TestFunctional/parallel/MountCmd/any-port 8.91
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
140 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
141 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
142 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
143 TestFunctional/parallel/Version/short 0.05
144 TestFunctional/parallel/Version/components 0.76
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.07
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.26
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.45
149 TestFunctional/parallel/MountCmd/specific-port 1.91
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.44
151 TestFunctional/delete_addon-resizer_images 0.07
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.02
157 TestMultiControlPlane/serial/StartCluster 209.78
158 TestMultiControlPlane/serial/DeployApp 6.61
159 TestMultiControlPlane/serial/PingHostFromPods 1.18
160 TestMultiControlPlane/serial/AddWorkerNode 47.81
161 TestMultiControlPlane/serial/NodeLabels 0.06
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.54
163 TestMultiControlPlane/serial/CopyFile 12.55
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.48
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.41
169 TestMultiControlPlane/serial/DeleteSecondaryNode 17.11
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.39
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.39
178 TestJSONOutput/start/Command 96.32
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.68
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.6
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 7.36
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.19
206 TestMainNoArgs 0.04
207 TestMinikubeProfile 81.83
210 TestMountStart/serial/StartWithMountFirst 24.28
211 TestMountStart/serial/VerifyMountFirst 0.37
212 TestMountStart/serial/StartWithMountSecond 27.57
213 TestMountStart/serial/VerifyMountSecond 0.36
214 TestMountStart/serial/DeleteFirst 0.68
215 TestMountStart/serial/VerifyMountPostDelete 0.36
216 TestMountStart/serial/Stop 1.26
217 TestMountStart/serial/RestartStopped 22.4
218 TestMountStart/serial/VerifyMountPostStop 0.37
221 TestMultiNode/serial/FreshStart2Nodes 99.78
222 TestMultiNode/serial/DeployApp2Nodes 5.31
223 TestMultiNode/serial/PingHostFrom2Pods 0.74
224 TestMultiNode/serial/AddNode 37.06
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.22
227 TestMultiNode/serial/CopyFile 7.06
228 TestMultiNode/serial/StopNode 2.25
229 TestMultiNode/serial/StartAfterStop 28.49
231 TestMultiNode/serial/DeleteNode 2.25
233 TestMultiNode/serial/RestartMultiNode 172.99
234 TestMultiNode/serial/ValidateNameConflict 44.53
241 TestScheduledStopUnix 111.02
245 TestRunningBinaryUpgrade 242.34
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
261 TestNoKubernetes/serial/StartWithK8s 68.26
266 TestNetworkPlugins/group/false 3.02
270 TestNoKubernetes/serial/StartWithStopK8s 50.92
271 TestNoKubernetes/serial/Start 57.79
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
273 TestNoKubernetes/serial/ProfileList 1.78
274 TestNoKubernetes/serial/Stop 1.3
275 TestNoKubernetes/serial/StartNoArgs 46.2
277 TestPause/serial/Start 63.38
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
279 TestStoppedBinaryUpgrade/Setup 2.3
280 TestStoppedBinaryUpgrade/Upgrade 122.84
281 TestPause/serial/SecondStartNoReconfiguration 53.9
282 TestPause/serial/Pause 0.65
283 TestPause/serial/VerifyStatus 0.25
284 TestPause/serial/Unpause 0.63
285 TestPause/serial/PauseAgain 0.76
286 TestPause/serial/DeletePaused 1.01
287 TestPause/serial/VerifyDeletedResources 14.53
290 TestStoppedBinaryUpgrade/MinikubeLogs 0.89
292 TestStartStop/group/embed-certs/serial/FirstStart 69.08
294 TestStartStop/group/no-preload/serial/FirstStart 129.37
295 TestStartStop/group/embed-certs/serial/DeployApp 10.32
296 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.03
298 TestStartStop/group/no-preload/serial/DeployApp 8.29
299 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.94
302 TestStartStop/group/embed-certs/serial/SecondStart 635.88
307 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 303.08
308 TestStartStop/group/no-preload/serial/SecondStart 574.28
309 TestStartStop/group/old-k8s-version/serial/Stop 6.29
310 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
312 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.32
313 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.01
316 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 627.56
325 TestStartStop/group/newest-cni/serial/FirstStart 55.77
326 TestNetworkPlugins/group/auto/Start 113.97
327 TestNetworkPlugins/group/kindnet/Start 87.73
328 TestStartStop/group/newest-cni/serial/DeployApp 0
329 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.2
330 TestStartStop/group/newest-cni/serial/Stop 8.37
331 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
332 TestStartStop/group/newest-cni/serial/SecondStart 43.98
333 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
336 TestStartStop/group/newest-cni/serial/Pause 2.52
337 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
338 TestNetworkPlugins/group/calico/Start 91.58
339 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
340 TestNetworkPlugins/group/kindnet/NetCatPod 11.25
341 TestNetworkPlugins/group/auto/KubeletFlags 0.23
342 TestNetworkPlugins/group/auto/NetCatPod 10.25
343 TestNetworkPlugins/group/kindnet/DNS 0.16
344 TestNetworkPlugins/group/kindnet/Localhost 0.18
345 TestNetworkPlugins/group/kindnet/HairPin 0.13
346 TestNetworkPlugins/group/auto/DNS 0.16
347 TestNetworkPlugins/group/auto/Localhost 0.12
348 TestNetworkPlugins/group/auto/HairPin 0.13
350 TestNetworkPlugins/group/custom-flannel/Start 99
351 TestNetworkPlugins/group/enable-default-cni/Start 120.13
352 TestNetworkPlugins/group/calico/ControllerPod 6.01
353 TestNetworkPlugins/group/calico/KubeletFlags 0.22
354 TestNetworkPlugins/group/calico/NetCatPod 11.28
355 TestNetworkPlugins/group/calico/DNS 0.16
356 TestNetworkPlugins/group/calico/Localhost 0.13
357 TestNetworkPlugins/group/calico/HairPin 0.13
358 TestNetworkPlugins/group/flannel/Start 81.62
359 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
360 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.27
361 TestNetworkPlugins/group/custom-flannel/DNS 0.17
362 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
363 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
364 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
365 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.28
366 TestNetworkPlugins/group/bridge/Start 98.54
367 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
368 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
369 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
370 TestNetworkPlugins/group/flannel/ControllerPod 6.01
371 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
372 TestNetworkPlugins/group/flannel/NetCatPod 10.23
373 TestNetworkPlugins/group/flannel/DNS 0.15
374 TestNetworkPlugins/group/flannel/Localhost 0.14
375 TestNetworkPlugins/group/flannel/HairPin 0.12
376 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
377 TestNetworkPlugins/group/bridge/NetCatPod 11.21
378 TestNetworkPlugins/group/bridge/DNS 0.14
379 TestNetworkPlugins/group/bridge/Localhost 0.12
380 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.20.0/json-events (48.86s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-996636 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-996636 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (48.857258935s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (48.86s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-996636
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-996636: exit status 85 (59.954385ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-996636 | jenkins | v1.33.1 | 10 Jun 24 10:20 UTC |          |
	|         | -p download-only-996636        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 10:20:46
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 10:20:46.986080   10770 out.go:291] Setting OutFile to fd 1 ...
	I0610 10:20:46.986328   10770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:20:46.986337   10770 out.go:304] Setting ErrFile to fd 2...
	I0610 10:20:46.986341   10770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:20:46.986514   10770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	W0610 10:20:46.986632   10770 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19046-3880/.minikube/config/config.json: open /home/jenkins/minikube-integration/19046-3880/.minikube/config/config.json: no such file or directory
	I0610 10:20:46.987258   10770 out.go:298] Setting JSON to true
	I0610 10:20:46.988121   10770 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":188,"bootTime":1718014659,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 10:20:46.988187   10770 start.go:139] virtualization: kvm guest
	I0610 10:20:46.990803   10770 out.go:97] [download-only-996636] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 10:20:46.992272   10770 out.go:169] MINIKUBE_LOCATION=19046
	W0610 10:20:46.990907   10770 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball: no such file or directory
	I0610 10:20:46.990949   10770 notify.go:220] Checking for updates...
	I0610 10:20:46.994951   10770 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:20:46.996235   10770 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 10:20:46.997590   10770 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:20:46.998815   10770 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0610 10:20:47.001443   10770 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0610 10:20:47.001693   10770 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 10:20:47.501423   10770 out.go:97] Using the kvm2 driver based on user configuration
	I0610 10:20:47.501452   10770 start.go:297] selected driver: kvm2
	I0610 10:20:47.501457   10770 start.go:901] validating driver "kvm2" against <nil>
	I0610 10:20:47.501753   10770 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:20:47.501860   10770 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19046-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0610 10:20:47.516313   10770 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0610 10:20:47.516372   10770 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 10:20:47.516779   10770 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0610 10:20:47.516911   10770 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 10:20:47.516941   10770 cni.go:84] Creating CNI manager for ""
	I0610 10:20:47.516996   10770 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 10:20:47.517013   10770 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 10:20:47.517067   10770 start.go:340] cluster config:
	{Name:download-only-996636 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-996636 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:20:47.517231   10770 iso.go:125] acquiring lock: {Name:mk85871dbcb370cef71a4aa2ff7ef43503908c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:20:47.519177   10770 out.go:97] Downloading VM boot image ...
	I0610 10:20:47.519207   10770 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19046-3880/.minikube/cache/iso/amd64/minikube-v1.33.1-1717668912-19038-amd64.iso
	I0610 10:20:56.141531   10770 out.go:97] Starting "download-only-996636" primary control-plane node in "download-only-996636" cluster
	I0610 10:20:56.141563   10770 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0610 10:20:56.248569   10770 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0610 10:20:56.248603   10770 cache.go:56] Caching tarball of preloaded images
	I0610 10:20:56.248773   10770 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0610 10:20:56.250605   10770 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0610 10:20:56.250630   10770 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0610 10:20:56.349960   10770 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0610 10:21:10.309959   10770 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0610 10:21:10.310051   10770 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0610 10:21:11.333997   10770 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0610 10:21:11.334310   10770 profile.go:143] Saving config to /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/download-only-996636/config.json ...
	I0610 10:21:11.334336   10770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/download-only-996636/config.json: {Name:mk54d7ffc4ae7f9ddbc50b3332f4600fb911095e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 10:21:11.334482   10770 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0610 10:21:11.334672   10770 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19046-3880/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-996636 host does not exist
	  To start a cluster, run: "minikube start -p download-only-996636"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-996636
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.30.1/json-events (12.03s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-938190 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-938190 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.027094373s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (12.03s)

                                                
                                    
TestDownloadOnly/v1.30.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-938190
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-938190: exit status 85 (56.677166ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-996636 | jenkins | v1.33.1 | 10 Jun 24 10:20 UTC |                     |
	|         | -p download-only-996636        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 10 Jun 24 10:21 UTC | 10 Jun 24 10:21 UTC |
	| delete  | -p download-only-996636        | download-only-996636 | jenkins | v1.33.1 | 10 Jun 24 10:21 UTC | 10 Jun 24 10:21 UTC |
	| start   | -o=json --download-only        | download-only-938190 | jenkins | v1.33.1 | 10 Jun 24 10:21 UTC |                     |
	|         | -p download-only-938190        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/06/10 10:21:36
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 10:21:36.154418   11105 out.go:291] Setting OutFile to fd 1 ...
	I0610 10:21:36.154668   11105 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:21:36.154677   11105 out.go:304] Setting ErrFile to fd 2...
	I0610 10:21:36.154681   11105 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:21:36.154884   11105 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 10:21:36.155445   11105 out.go:298] Setting JSON to true
	I0610 10:21:36.156272   11105 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":237,"bootTime":1718014659,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 10:21:36.156328   11105 start.go:139] virtualization: kvm guest
	I0610 10:21:36.158636   11105 out.go:97] [download-only-938190] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 10:21:36.160262   11105 out.go:169] MINIKUBE_LOCATION=19046
	I0610 10:21:36.158755   11105 notify.go:220] Checking for updates...
	I0610 10:21:36.163128   11105 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:21:36.164639   11105 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 10:21:36.165980   11105 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:21:36.167336   11105 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0610 10:21:36.170078   11105 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0610 10:21:36.170382   11105 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 10:21:36.204834   11105 out.go:97] Using the kvm2 driver based on user configuration
	I0610 10:21:36.204858   11105 start.go:297] selected driver: kvm2
	I0610 10:21:36.204864   11105 start.go:901] validating driver "kvm2" against <nil>
	I0610 10:21:36.205218   11105 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:21:36.205295   11105 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19046-3880/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0610 10:21:36.221247   11105 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0610 10:21:36.221293   11105 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0610 10:21:36.222028   11105 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0610 10:21:36.222195   11105 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 10:21:36.222253   11105 cni.go:84] Creating CNI manager for ""
	I0610 10:21:36.222270   11105 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0610 10:21:36.222279   11105 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 10:21:36.222337   11105 start.go:340] cluster config:
	{Name:download-only-938190 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-938190 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:21:36.222488   11105 iso.go:125] acquiring lock: {Name:mk85871dbcb370cef71a4aa2ff7ef43503908c3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 10:21:36.224181   11105 out.go:97] Starting "download-only-938190" primary control-plane node in "download-only-938190" cluster
	I0610 10:21:36.224200   11105 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 10:21:36.623554   11105 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0610 10:21:36.623603   11105 cache.go:56] Caching tarball of preloaded images
	I0610 10:21:36.623897   11105 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0610 10:21:36.625655   11105 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0610 10:21:36.625673   11105 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 ...
	I0610 10:21:36.723021   11105 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:a8c8ea593b2bc93a46ce7b040a44f86d -> /home/jenkins/minikube-integration/19046-3880/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-938190 host does not exist
	  To start a cluster, run: "minikube start -p download-only-938190"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.30.1/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-938190
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.55s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-775609 --alsologtostderr --binary-mirror http://127.0.0.1:34103 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-775609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-775609
--- PASS: TestBinaryMirror (0.55s)

                                                
                                    
TestOffline (100.66s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-079649 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-079649 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m39.868511442s)
helpers_test.go:175: Cleaning up "offline-crio-079649" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-079649
--- PASS: TestOffline (100.66s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-021732
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-021732: exit status 85 (45.958122ms)

                                                
                                                
-- stdout --
	* Profile "addons-021732" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-021732"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-021732
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-021732: exit status 85 (44.498009ms)

                                                
                                                
-- stdout --
	* Profile "addons-021732" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-021732"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

                                                
                                    
TestAddons/Setup (143.17s)
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-021732 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-021732 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m23.170146322s)
--- PASS: TestAddons/Setup (143.17s)
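For reference, a minimal sketch of the pattern this setup run exercises: enabling addons at start time and toggling them afterwards. The profile name "addons-demo" and the addon subset are illustrative, not the exact invocation above.
	# enable a set of addons while creating the profile (illustrative subset)
	minikube start -p addons-demo --driver=kvm2 --container-runtime=crio \
	  --addons=ingress --addons=metrics-server --addons=registry
	# addons can also be toggled on a running profile
	minikube addons enable csi-hostpath-driver -p addons-demo
	minikube addons list -p addons-demo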

                                                
                                    
TestAddons/parallel/Registry (17.01s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 19.161031ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-xmm5t" [50b19bb8-aabd-4c89-a304-877505b561a3] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006196869s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lq94h" [4b7b9e8d-e9e9-450e-877e-156e3a37a859] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005607672s
addons_test.go:342: (dbg) Run:  kubectl --context addons-021732 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-021732 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-021732 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.271312561s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-021732 ip
2024/06/10 10:24:28 [DEBUG] GET http://192.168.39.244:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-021732 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.01s)
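For reference, a minimal sketch of repeating the registry check by hand, using the same probe the test runs; the pod name "registry-probe" is illustrative.
	# probe the in-cluster registry Service from a throwaway busybox pod
	kubectl --context addons-021732 run registry-probe --rm -it --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# the registry-proxy also publishes the registry on the node IP at port 5000
	minikube -p addons-021732 ip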

                                                
                                    
TestAddons/parallel/InspektorGadget (23.99s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-5tv8k" [488b8a11-6759-426f-b3f0-a5b0f7f0ca17] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005546299s
addons_test.go:843: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-021732
addons_test.go:843: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-021732: (17.986398516s)
--- PASS: TestAddons/parallel/InspektorGadget (23.99s)

                                                
                                    
TestAddons/parallel/HelmTiller (12.98s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 19.584599ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-86c76" [3257a893-b201-4088-be48-fb02698a0350] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.00676258s
addons_test.go:475: (dbg) Run:  kubectl --context addons-021732 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-021732 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.226058146s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-021732 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.98s)

                                                
                                    
TestAddons/parallel/CSI (87.02s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 7.061135ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-021732 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-021732 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [8a64af0f-545d-4bed-abc5-1187ea5cd9d7] Pending
helpers_test.go:344: "task-pv-pod" [8a64af0f-545d-4bed-abc5-1187ea5cd9d7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [8a64af0f-545d-4bed-abc5-1187ea5cd9d7] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004254636s
addons_test.go:586: (dbg) Run:  kubectl --context addons-021732 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-021732 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-021732 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-021732 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-021732 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-021732 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-021732 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d0abbf55-57a7-4ec8-ab0c-73c4d4b6bae4] Pending
helpers_test.go:344: "task-pv-pod-restore" [d0abbf55-57a7-4ec8-ab0c-73c4d4b6bae4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d0abbf55-57a7-4ec8-ab0c-73c4d4b6bae4] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004329162s
addons_test.go:628: (dbg) Run:  kubectl --context addons-021732 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-021732 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-021732 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-021732 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p addons-021732 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.698160233s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-021732 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (87.02s)
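For reference, a condensed sketch of the provision, snapshot and restore flow exercised above, reusing the same manifests from the integration test's testdata directory:
	# provision a PVC and a pod that mounts it
	kubectl --context addons-021732 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-021732 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	# snapshot the volume, then restore it into a new PVC and pod
	kubectl --context addons-021732 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-021732 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-021732 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
	# watch the objects converge
	kubectl --context addons-021732 get pvc,volumesnapshot,pod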

                                                
                                    
TestAddons/parallel/Headlamp (14.28s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-021732 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-021732 --alsologtostderr -v=1: (1.278938551s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7fc69f7444-b726p" [53f367ca-294c-4305-b2f4-54c5bb185ad9] Pending
helpers_test.go:344: "headlamp-7fc69f7444-b726p" [53f367ca-294c-4305-b2f4-54c5bb185ad9] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7fc69f7444-b726p" [53f367ca-294c-4305-b2f4-54c5bb185ad9] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004206932s
--- PASS: TestAddons/parallel/Headlamp (14.28s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.55s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-9fd2g" [ed6dbd72-e0e2-42f3-b17f-4188c8218f6b] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004596738s
addons_test.go:862: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-021732
--- PASS: TestAddons/parallel/CloudSpanner (5.55s)

                                                
                                    
TestAddons/parallel/LocalPath (12.1s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-021732 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-021732 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-021732 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3888e489-d4bb-4a8c-81e9-031d7ccc913c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3888e489-d4bb-4a8c-81e9-031d7ccc913c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3888e489-d4bb-4a8c-81e9-031d7ccc913c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004003245s
addons_test.go:992: (dbg) Run:  kubectl --context addons-021732 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p addons-021732 ssh "cat /opt/local-path-provisioner/pvc-be3afae5-1392-4466-a1db-28b1c658ba01_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-021732 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-021732 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 -p addons-021732 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.10s)
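For reference, a minimal sketch of the local-path flow above; the exact host directory includes the PVC UID, so it is listed rather than hard-coded here.
	# bind a PVC through the local-path provisioner and write to it from a pod
	kubectl --context addons-021732 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-021732 apply -f testdata/storage-provisioner-rancher/pod.yaml
	# data lands on the node under /opt/local-path-provisioner/<pvc-uid>_<namespace>_<pvc-name>/
	minikube -p addons-021732 ssh "ls /opt/local-path-provisioner/"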

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.48s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2zf77" [6e61695c-8992-480f-826d-23a9f83617e8] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00511832s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-021732
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.48s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-p8pv2" [c0ef4698-bf75-4680-bcfa-95167d27a615] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004026099s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-021732 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-021732 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)
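For reference, the check above in two commands: the gcp-auth webhook is expected to copy its secret into any newly created namespace.
	kubectl --context addons-021732 create ns new-namespace
	kubectl --context addons-021732 get secret gcp-auth -n new-namespace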

                                                
                                    
TestCertOptions (75.29s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-151326 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-151326 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m14.062616468s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-151326 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-151326 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-151326 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-151326" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-151326
--- PASS: TestCertOptions (75.29s)
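For reference, a minimal sketch of the certificate-options flow above; the grep filter is an addition for readability, not part of the test.
	# issue the apiserver certificate with extra SANs and a non-default port
	minikube start -p cert-options-151326 --driver=kvm2 --container-runtime=crio \
	  --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com --apiserver-port=8555
	# confirm the extra names and IPs made it into the served certificate
	minikube -p cert-options-151326 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"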

                                                
                                    
TestCertExpiration (283.12s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-324836 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-324836 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m9.449251956s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-324836 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-324836 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (32.58781847s)
helpers_test.go:175: Cleaning up "cert-expiration-324836" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-324836
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-324836: (1.078143287s)
--- PASS: TestCertExpiration (283.12s)
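For reference, the two starts above in sketch form: a second start with a new --cert-expiration value regenerates the soon-to-expire certificates in place.
	minikube start -p cert-expiration-324836 --driver=kvm2 --container-runtime=crio --cert-expiration=3m
	# ...wait for the 3m certificates to near expiry, then re-issue with a one-year lifetime
	minikube start -p cert-expiration-324836 --driver=kvm2 --container-runtime=crio --cert-expiration=8760h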

                                                
                                    
TestForceSystemdFlag (55.9s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-823553 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-823553 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (54.746975192s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-823553 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-823553" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-823553
--- PASS: TestForceSystemdFlag (55.90s)
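For reference, a minimal sketch of the systemd-cgroup check above; the profile name "force-systemd-demo" is illustrative.
	# force the systemd cgroup manager and inspect CRI-O's generated drop-in config
	minikube start -p force-systemd-demo --driver=kvm2 --container-runtime=crio --force-systemd
	minikube -p force-systemd-demo ssh "cat /etc/crio/crio.conf.d/02-crio.conf"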

                                                
                                    
TestForceSystemdEnv (78.67s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-355595 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-355595 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m17.913819552s)
helpers_test.go:175: Cleaning up "force-systemd-env-355595" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-355595
--- PASS: TestForceSystemdEnv (78.67s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.12s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.12s)

                                                
                                    
TestErrorSpam/setup (38.59s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-215030 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-215030 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-215030 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-215030 --driver=kvm2  --container-runtime=crio: (38.591867102s)
--- PASS: TestErrorSpam/setup (38.59s)

                                                
                                    
TestErrorSpam/start (0.35s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215030 --log_dir /tmp/nospam-215030 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215030 --log_dir /tmp/nospam-215030 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215030 --log_dir /tmp/nospam-215030 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
TestErrorSpam/status (0.73s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215030 --log_dir /tmp/nospam-215030 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215030 --log_dir /tmp/nospam-215030 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215030 --log_dir /tmp/nospam-215030 status
--- PASS: TestErrorSpam/status (0.73s)

                                                
                                    
TestErrorSpam/pause (1.5s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215030 --log_dir /tmp/nospam-215030 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215030 --log_dir /tmp/nospam-215030 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215030 --log_dir /tmp/nospam-215030 pause
--- PASS: TestErrorSpam/pause (1.50s)

                                                
                                    
TestErrorSpam/unpause (1.58s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215030 --log_dir /tmp/nospam-215030 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215030 --log_dir /tmp/nospam-215030 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215030 --log_dir /tmp/nospam-215030 unpause
--- PASS: TestErrorSpam/unpause (1.58s)

                                                
                                    
TestErrorSpam/stop (5.76s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215030 --log_dir /tmp/nospam-215030 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-215030 --log_dir /tmp/nospam-215030 stop: (2.280935156s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215030 --log_dir /tmp/nospam-215030 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-215030 --log_dir /tmp/nospam-215030 stop: (1.535201999s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-215030 --log_dir /tmp/nospam-215030 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-215030 --log_dir /tmp/nospam-215030 stop: (1.944800744s)
--- PASS: TestErrorSpam/stop (5.76s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19046-3880/.minikube/files/etc/test/nested/copy/10758/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (55.52s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-647968 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0610 10:34:12.453148   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
E0610 10:34:12.458792   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
E0610 10:34:12.469097   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
E0610 10:34:12.489412   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
E0610 10:34:12.529728   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
E0610 10:34:12.610153   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
E0610 10:34:12.770679   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
E0610 10:34:13.091285   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
E0610 10:34:13.732182   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
E0610 10:34:15.012682   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
E0610 10:34:17.573675   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
E0610 10:34:22.694152   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
E0610 10:34:32.935356   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
E0610 10:34:53.415504   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-647968 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (55.52019617s)
--- PASS: TestFunctional/serial/StartWithProxy (55.52s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (54.76s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-647968 --alsologtostderr -v=8
E0610 10:35:34.375815   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-647968 --alsologtostderr -v=8: (54.761452022s)
functional_test.go:659: soft start took 54.762112306s for "functional-647968" cluster.
--- PASS: TestFunctional/serial/SoftStart (54.76s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-647968 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.76s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-647968 cache add registry.k8s.io/pause:3.1: (1.569185735s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-647968 cache add registry.k8s.io/pause:3.3: (1.675239688s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-647968 cache add registry.k8s.io/pause:latest: (1.518866528s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.76s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.43s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-647968 /tmp/TestFunctionalserialCacheCmdcacheadd_local3978386431/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 cache add minikube-local-cache-test:functional-647968
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-647968 cache add minikube-local-cache-test:functional-647968: (2.098696242s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 cache delete minikube-local-cache-test:functional-647968
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-647968
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.43s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.99s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-647968 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (217.431004ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-647968 cache reload: (1.303118866s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.99s)
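For reference, the cache round-trip above as a sketch: add an image to minikube's local cache, remove it from the node's runtime, then restore it with a reload.
	minikube -p functional-647968 cache add registry.k8s.io/pause:latest
	minikube -p functional-647968 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-647968 cache reload
	# the image should be present again
	minikube -p functional-647968 ssh sudo crictl inspecti registry.k8s.io/pause:latest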

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 kubectl -- --context functional-647968 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-647968 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (40.98s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-647968 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-647968 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.98280298s)
functional_test.go:757: restart took 40.98293173s for "functional-647968" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.98s)
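For reference, the --extra-config form used above, which forwards a flag to a specific control-plane component in component.key=value form:
	minikube start -p functional-647968 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all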

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-647968 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.44s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-647968 logs: (1.442968089s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.44s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 logs --file /tmp/TestFunctionalserialLogsFileCmd2941605935/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-647968 logs --file /tmp/TestFunctionalserialLogsFileCmd2941605935/001/logs.txt: (1.436283684s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.44s)

                                                
                                    
TestFunctional/serial/InvalidService (4.22s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-647968 apply -f testdata/invalidsvc.yaml
E0610 10:36:56.296838   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-647968
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-647968: exit status 115 (268.588294ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.252:32760 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-647968 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.22s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-647968 config get cpus: exit status 14 (49.818719ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-647968 config get cpus: exit status 14 (43.341899ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.31s)
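The exit status 14 seen twice above is what `config get` returns once the key has been unset. A small sketch of checking that behaviour programmatically, assuming the same binary path and profile name shown in this run:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Binary and profile names copied from this run; adjust as needed.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-647968", "config", "get", "cpus")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("cpus is set: %s", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 14:
		// Exit code 14 is what the log above shows for a key missing from the config.
		fmt.Println("cpus is not set in the minikube config")
	default:
		log.Fatalf("unexpected failure: %v\n%s", err, out)
	}
}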

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (30.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-647968 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-647968 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 21193: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (30.11s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-647968 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-647968 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (133.922021ms)

                                                
                                                
-- stdout --
	* [functional-647968] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19046
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:37:11.214108   20162 out.go:291] Setting OutFile to fd 1 ...
	I0610 10:37:11.214319   20162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:37:11.214383   20162 out.go:304] Setting ErrFile to fd 2...
	I0610 10:37:11.214404   20162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:37:11.214851   20162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 10:37:11.215409   20162 out.go:298] Setting JSON to false
	I0610 10:37:11.216268   20162 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1172,"bootTime":1718014659,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 10:37:11.216327   20162 start.go:139] virtualization: kvm guest
	I0610 10:37:11.218461   20162 out.go:177] * [functional-647968] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 10:37:11.219718   20162 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 10:37:11.219722   20162 notify.go:220] Checking for updates...
	I0610 10:37:11.221208   20162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:37:11.222560   20162 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 10:37:11.223967   20162 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:37:11.225607   20162 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 10:37:11.227020   20162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:37:11.228800   20162 config.go:182] Loaded profile config "functional-647968": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:37:11.229424   20162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:37:11.229510   20162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:37:11.244709   20162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40169
	I0610 10:37:11.245261   20162 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:37:11.245852   20162 main.go:141] libmachine: Using API Version  1
	I0610 10:37:11.245880   20162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:37:11.246311   20162 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:37:11.246510   20162 main.go:141] libmachine: (functional-647968) Calling .DriverName
	I0610 10:37:11.246839   20162 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 10:37:11.247219   20162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:37:11.247252   20162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:37:11.261718   20162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44461
	I0610 10:37:11.262163   20162 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:37:11.262626   20162 main.go:141] libmachine: Using API Version  1
	I0610 10:37:11.262650   20162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:37:11.262943   20162 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:37:11.263125   20162 main.go:141] libmachine: (functional-647968) Calling .DriverName
	I0610 10:37:11.298792   20162 out.go:177] * Using the kvm2 driver based on existing profile
	I0610 10:37:11.300120   20162 start.go:297] selected driver: kvm2
	I0610 10:37:11.300134   20162 start.go:901] validating driver "kvm2" against &{Name:functional-647968 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:functional-647968 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.252 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:37:11.300252   20162 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:37:11.302180   20162 out.go:177] 
	W0610 10:37:11.303522   20162 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0610 10:37:11.304890   20162 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-647968 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-647968 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-647968 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (151.300489ms)

                                                
                                                
-- stdout --
	* [functional-647968] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19046
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 10:37:11.503168   20242 out.go:291] Setting OutFile to fd 1 ...
	I0610 10:37:11.503526   20242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:37:11.503605   20242 out.go:304] Setting ErrFile to fd 2...
	I0610 10:37:11.503623   20242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 10:37:11.504146   20242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 10:37:11.504663   20242 out.go:298] Setting JSON to false
	I0610 10:37:11.505574   20242 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1172,"bootTime":1718014659,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 10:37:11.505632   20242 start.go:139] virtualization: kvm guest
	I0610 10:37:11.507599   20242 out.go:177] * [functional-647968] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0610 10:37:11.508896   20242 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 10:37:11.510188   20242 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 10:37:11.508848   20242 notify.go:220] Checking for updates...
	I0610 10:37:11.512801   20242 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 10:37:11.514135   20242 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 10:37:11.515499   20242 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 10:37:11.516759   20242 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 10:37:11.518477   20242 config.go:182] Loaded profile config "functional-647968": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 10:37:11.519145   20242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:37:11.519208   20242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:37:11.534544   20242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41229
	I0610 10:37:11.534990   20242 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:37:11.535641   20242 main.go:141] libmachine: Using API Version  1
	I0610 10:37:11.535668   20242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:37:11.536072   20242 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:37:11.536286   20242 main.go:141] libmachine: (functional-647968) Calling .DriverName
	I0610 10:37:11.536606   20242 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 10:37:11.537046   20242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 10:37:11.537090   20242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 10:37:11.551506   20242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43863
	I0610 10:37:11.552007   20242 main.go:141] libmachine: () Calling .GetVersion
	I0610 10:37:11.552547   20242 main.go:141] libmachine: Using API Version  1
	I0610 10:37:11.552576   20242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 10:37:11.552970   20242 main.go:141] libmachine: () Calling .GetMachineName
	I0610 10:37:11.553146   20242 main.go:141] libmachine: (functional-647968) Calling .DriverName
	I0610 10:37:11.595643   20242 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0610 10:37:11.597042   20242 start.go:297] selected driver: kvm2
	I0610 10:37:11.597061   20242 start.go:901] validating driver "kvm2" against &{Name:functional-647968 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19038/minikube-v1.33.1-1717668912-19038-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1717668449-19038@sha256:30d191eb345232f513c52f7ac036e7a34a8cc441d88353f92985384bcddf00d6 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:functional-647968 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.252 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0610 10:37:11.597223   20242 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 10:37:11.599729   20242 out.go:177] 
	W0610 10:37:11.601160   20242 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0610 10:37:11.602589   20242 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.30s)
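The second invocation above passes a Go text/template as the status format, so each `{{.Field}}` must name an exported field of the status value. A local illustration of rendering that exact format string over a stand-in struct (the struct here is an assumption for demonstration, not minikube's own status type):

package main

import (
	"log"
	"os"
	"text/template"
)

// status is a stand-in with the fields the format string references.
type status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// The same format string used by the test above.
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl, err := template.New("status").Parse(format)
	if err != nil {
		log.Fatal(err)
	}
	s := status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	if err := tmpl.Execute(os.Stdout, s); err != nil {
		log.Fatal(err)
	}
}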

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (9.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-647968 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-647968 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-lsshq" [829ae16e-2d6b-4516-97ff-aea38183c619] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-lsshq" [829ae16e-2d6b-4516-97ff-aea38183c619] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004056249s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.252:32672
functional_test.go:1671: http://192.168.39.252:32672: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-lsshq

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.252:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.252:32672
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.58s)
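The flow above is: create a Deployment, expose it as a NodePort Service, ask minikube for the URL, then fetch it over HTTP. A condensed sketch of the last two steps, assuming the same binary, profile, and service names as this run:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the NodePort URL of the service created above.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-647968",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		log.Fatalf("service --url: %v", err)
	}
	url := strings.TrimSpace(string(out))

	// Hit the echoserver endpoint and dump the body, mirroring the test's success check.
	resp, err := http.Get(url)
	if err != nil {
		log.Fatalf("GET %s: %v", url, err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s -> %s\n%s\n", url, resp.Status, body)
}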

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (50.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3c0730ab-e23b-42dc-a011-89c3dec41643] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004587923s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-647968 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-647968 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-647968 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-647968 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5a258c04-b409-463a-b301-79689a62134a] Pending
helpers_test.go:344: "sp-pod" [5a258c04-b409-463a-b301-79689a62134a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5a258c04-b409-463a-b301-79689a62134a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004388108s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-647968 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-647968 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-647968 delete -f testdata/storage-provisioner/pod.yaml: (2.464380206s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-647968 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0e91a2b2-81a6-48c8-bb95-337a0317ba0f] Pending
helpers_test.go:344: "sp-pod" [0e91a2b2-81a6-48c8-bb95-337a0317ba0f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0e91a2b2-81a6-48c8-bb95-337a0317ba0f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.004663559s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-647968 exec sp-pod -- ls /tmp/mount
2024/06/10 10:37:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (50.65s)
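The test writes /tmp/mount/foo into the pod, deletes the pod, recreates it from the same PVC-backed manifest, and then checks the file survived. A sketch of just that persistence check, assuming kubectl is on PATH, the context name from this run, and the same testdata manifest path:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func run(args ...string) (string, error) {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-647968"}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	// 1. Write a marker file onto the PVC-backed mount inside the running pod.
	if out, err := run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo"); err != nil {
		log.Fatalf("touch: %v\n%s", err, out)
	}
	// 2. Delete and recreate the pod from the same manifest used in the test.
	if out, err := run("delete", "-f", "testdata/storage-provisioner/pod.yaml"); err != nil {
		log.Fatalf("delete: %v\n%s", err, out)
	}
	if out, err := run("apply", "-f", "testdata/storage-provisioner/pod.yaml"); err != nil {
		log.Fatalf("apply: %v\n%s", err, out)
	}
	// 3. Poll until the new pod is Running, then confirm the file is still there.
	ready := false
	for i := 0; i < 90; i++ {
		phase, _ := run("get", "pod", "sp-pod", "-o", "jsonpath={.status.phase}")
		if strings.TrimSpace(phase) == "Running" {
			ready = true
			break
		}
		time.Sleep(2 * time.Second)
	}
	if !ready {
		log.Fatal("sp-pod never reached Running")
	}
	out, err := run("exec", "sp-pod", "--", "ls", "/tmp/mount")
	if err != nil {
		log.Fatalf("ls: %v\n%s", err, out)
	}
	fmt.Print(out) // expect "foo" to survive the pod recreation
}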

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh -n functional-647968 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 cp functional-647968:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1926668518/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh -n functional-647968 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh -n functional-647968 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.28s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (27.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-647968 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-f6vsj" [5f9fcc14-352a-4575-83c9-19f4b8f29284] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-f6vsj" [5f9fcc14-352a-4575-83c9-19f4b8f29284] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.597733101s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-647968 exec mysql-64454c8b5c-f6vsj -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-647968 exec mysql-64454c8b5c-f6vsj -- mysql -ppassword -e "show databases;": exit status 1 (130.781105ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-647968 exec mysql-64454c8b5c-f6vsj -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-647968 exec mysql-64454c8b5c-f6vsj -- mysql -ppassword -e "show databases;": exit status 1 (126.606618ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-647968 exec mysql-64454c8b5c-f6vsj -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.87s)
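The two non-zero exits above (ERROR 2002) are just mysqld still starting inside the pod; the test retries the query until the socket is up. A minimal retry loop around the same kubectl exec command, assuming the pod name from this run (in practice you would look it up by the app=mysql label):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	// Pod name copied from this run.
	const pod = "mysql-64454c8b5c-f6vsj"
	deadline := time.Now().Add(2 * time.Minute)
	for {
		out, err := exec.Command("kubectl", "--context", "functional-647968",
			"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("mysql never became ready: %v\n%s", err, out)
		}
		// ERROR 2002 means mysqld has not opened its socket yet; back off and retry.
		time.Sleep(5 * time.Second)
	}
}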

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/10758/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh "sudo cat /etc/test/nested/copy/10758/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/10758.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh "sudo cat /etc/ssl/certs/10758.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/10758.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh "sudo cat /usr/share/ca-certificates/10758.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/107582.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh "sudo cat /etc/ssl/certs/107582.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/107582.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh "sudo cat /usr/share/ca-certificates/107582.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.55s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-647968 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-647968 ssh "sudo systemctl is-active docker": exit status 1 (217.993923ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-647968 ssh "sudo systemctl is-active containerd": exit status 1 (223.509625ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
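With crio as the configured runtime, `systemctl is-active docker` and `... containerd` both print "inactive" and exit non-zero, which is why the test treats these exit-status-1 results as a pass. A short sketch of the same probe over `minikube ssh`, assuming the binary and profile names from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, rt := range []string{"docker", "containerd"} {
		// A non-zero exit is expected here when crio is the active runtime.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-647968",
			"ssh", "sudo systemctl is-active "+rt).Output()
		fmt.Printf("%s: %s (exit err: %v)\n", rt, strings.TrimSpace(string(out)), err)
	}
}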

                                                
                                    
x
+
TestFunctional/parallel/License (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2284: (dbg) Done: out/minikube-linux-amd64 license: (1.0071184s)
--- PASS: TestFunctional/parallel/License (1.01s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-647968 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-647968
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-647968
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-647968 image ls --format short --alsologtostderr:
I0610 10:37:24.025367   21392 out.go:291] Setting OutFile to fd 1 ...
I0610 10:37:24.025605   21392 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 10:37:24.025613   21392 out.go:304] Setting ErrFile to fd 2...
I0610 10:37:24.025617   21392 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 10:37:24.025799   21392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
I0610 10:37:24.026330   21392 config.go:182] Loaded profile config "functional-647968": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0610 10:37:24.026419   21392 config.go:182] Loaded profile config "functional-647968": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0610 10:37:24.027255   21392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0610 10:37:24.027316   21392 main.go:141] libmachine: Launching plugin server for driver kvm2
I0610 10:37:24.042336   21392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41059
I0610 10:37:24.042796   21392 main.go:141] libmachine: () Calling .GetVersion
I0610 10:37:24.043310   21392 main.go:141] libmachine: Using API Version  1
I0610 10:37:24.043333   21392 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 10:37:24.043716   21392 main.go:141] libmachine: () Calling .GetMachineName
I0610 10:37:24.043912   21392 main.go:141] libmachine: (functional-647968) Calling .GetState
I0610 10:37:24.046072   21392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0610 10:37:24.046123   21392 main.go:141] libmachine: Launching plugin server for driver kvm2
I0610 10:37:24.060706   21392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34629
I0610 10:37:24.061180   21392 main.go:141] libmachine: () Calling .GetVersion
I0610 10:37:24.061716   21392 main.go:141] libmachine: Using API Version  1
I0610 10:37:24.061744   21392 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 10:37:24.062114   21392 main.go:141] libmachine: () Calling .GetMachineName
I0610 10:37:24.062296   21392 main.go:141] libmachine: (functional-647968) Calling .DriverName
I0610 10:37:24.062504   21392 ssh_runner.go:195] Run: systemctl --version
I0610 10:37:24.062533   21392 main.go:141] libmachine: (functional-647968) Calling .GetSSHHostname
I0610 10:37:24.065469   21392 main.go:141] libmachine: (functional-647968) DBG | domain functional-647968 has defined MAC address 52:54:00:56:75:47 in network mk-functional-647968
I0610 10:37:24.065983   21392 main.go:141] libmachine: (functional-647968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:75:47", ip: ""} in network mk-functional-647968: {Iface:virbr1 ExpiryTime:2024-06-10 11:34:22 +0000 UTC Type:0 Mac:52:54:00:56:75:47 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:functional-647968 Clientid:01:52:54:00:56:75:47}
I0610 10:37:24.066019   21392 main.go:141] libmachine: (functional-647968) DBG | domain functional-647968 has defined IP address 192.168.39.252 and MAC address 52:54:00:56:75:47 in network mk-functional-647968
I0610 10:37:24.066161   21392 main.go:141] libmachine: (functional-647968) Calling .GetSSHPort
I0610 10:37:24.066418   21392 main.go:141] libmachine: (functional-647968) Calling .GetSSHKeyPath
I0610 10:37:24.066573   21392 main.go:141] libmachine: (functional-647968) Calling .GetSSHUsername
I0610 10:37:24.066728   21392 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/functional-647968/id_rsa Username:docker}
I0610 10:37:24.156624   21392 ssh_runner.go:195] Run: sudo crictl images --output json
I0610 10:37:24.213280   21392 main.go:141] libmachine: Making call to close driver server
I0610 10:37:24.213297   21392 main.go:141] libmachine: (functional-647968) Calling .Close
I0610 10:37:24.213541   21392 main.go:141] libmachine: Successfully made call to close driver server
I0610 10:37:24.213561   21392 main.go:141] libmachine: Making call to close connection to plugin binary
I0610 10:37:24.213571   21392 main.go:141] libmachine: Making call to close driver server
I0610 10:37:24.213580   21392 main.go:141] libmachine: (functional-647968) Calling .Close
I0610 10:37:24.213804   21392 main.go:141] libmachine: (functional-647968) DBG | Closing plugin on server side
I0610 10:37:24.213810   21392 main.go:141] libmachine: Successfully made call to close driver server
I0610 10:37:24.213829   21392 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-647968 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | 4f67c83422ec7 | 192MB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.30.1            | 25a1387cdab82 | 112MB  |
| registry.k8s.io/kube-proxy              | v1.30.1            | 747097150317f | 85.9MB |
| registry.k8s.io/kube-scheduler          | v1.30.1            | a52dc94f0a912 | 63MB   |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/google-containers/addon-resizer  | functional-647968  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-apiserver          | v1.30.1            | 91be940803172 | 118MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-647968  | 359c81fb537d2 | 3.33kB |
| localhost/my-image                      | functional-647968  | bb7e9d75d7258 | 1.47MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-647968 image ls --format table --alsologtostderr:
I0610 10:37:32.331952   21573 out.go:291] Setting OutFile to fd 1 ...
I0610 10:37:32.332064   21573 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 10:37:32.332073   21573 out.go:304] Setting ErrFile to fd 2...
I0610 10:37:32.332077   21573 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 10:37:32.332274   21573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
I0610 10:37:32.332849   21573 config.go:182] Loaded profile config "functional-647968": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0610 10:37:32.332985   21573 config.go:182] Loaded profile config "functional-647968": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0610 10:37:32.333385   21573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0610 10:37:32.333432   21573 main.go:141] libmachine: Launching plugin server for driver kvm2
I0610 10:37:32.347652   21573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40919
I0610 10:37:32.348119   21573 main.go:141] libmachine: () Calling .GetVersion
I0610 10:37:32.348627   21573 main.go:141] libmachine: Using API Version  1
I0610 10:37:32.348641   21573 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 10:37:32.349012   21573 main.go:141] libmachine: () Calling .GetMachineName
I0610 10:37:32.349182   21573 main.go:141] libmachine: (functional-647968) Calling .GetState
I0610 10:37:32.350861   21573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0610 10:37:32.350894   21573 main.go:141] libmachine: Launching plugin server for driver kvm2
I0610 10:37:32.364747   21573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45483
I0610 10:37:32.365214   21573 main.go:141] libmachine: () Calling .GetVersion
I0610 10:37:32.365642   21573 main.go:141] libmachine: Using API Version  1
I0610 10:37:32.365660   21573 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 10:37:32.365941   21573 main.go:141] libmachine: () Calling .GetMachineName
I0610 10:37:32.366095   21573 main.go:141] libmachine: (functional-647968) Calling .DriverName
I0610 10:37:32.366321   21573 ssh_runner.go:195] Run: systemctl --version
I0610 10:37:32.366348   21573 main.go:141] libmachine: (functional-647968) Calling .GetSSHHostname
I0610 10:37:32.369018   21573 main.go:141] libmachine: (functional-647968) DBG | domain functional-647968 has defined MAC address 52:54:00:56:75:47 in network mk-functional-647968
I0610 10:37:32.369389   21573 main.go:141] libmachine: (functional-647968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:75:47", ip: ""} in network mk-functional-647968: {Iface:virbr1 ExpiryTime:2024-06-10 11:34:22 +0000 UTC Type:0 Mac:52:54:00:56:75:47 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:functional-647968 Clientid:01:52:54:00:56:75:47}
I0610 10:37:32.369423   21573 main.go:141] libmachine: (functional-647968) DBG | domain functional-647968 has defined IP address 192.168.39.252 and MAC address 52:54:00:56:75:47 in network mk-functional-647968
I0610 10:37:32.369565   21573 main.go:141] libmachine: (functional-647968) Calling .GetSSHPort
I0610 10:37:32.369705   21573 main.go:141] libmachine: (functional-647968) Calling .GetSSHKeyPath
I0610 10:37:32.369826   21573 main.go:141] libmachine: (functional-647968) Calling .GetSSHUsername
I0610 10:37:32.369965   21573 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/functional-647968/id_rsa Username:docker}
I0610 10:37:32.455083   21573 ssh_runner.go:195] Run: sudo crictl images --output json
I0610 10:37:32.496185   21573 main.go:141] libmachine: Making call to close driver server
I0610 10:37:32.496204   21573 main.go:141] libmachine: (functional-647968) Calling .Close
I0610 10:37:32.496535   21573 main.go:141] libmachine: Successfully made call to close driver server
I0610 10:37:32.496552   21573 main.go:141] libmachine: Making call to close connection to plugin binary
I0610 10:37:32.496566   21573 main.go:141] libmachine: Making call to close driver server
I0610 10:37:32.496573   21573 main.go:141] libmachine: (functional-647968) Calling .Close
I0610 10:37:32.496791   21573 main.go:141] libmachine: Successfully made call to close driver server
I0610 10:37:32.496808   21573 main.go:141] libmachine: Making call to close connection to plugin binary
I0610 10:37:32.496838   21573 main.go:141] libmachine: (functional-647968) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-647968 image ls --format json --alsologtostderr:
[{"id":"b30f9b1bd29928900de82862027524fbf0a6673997ad75bef7075c196c008d07","repoDigests":["docker.io/library/e954fc274667cbb85f4aec47df3a100faff66785a787e488b55bb1ee3f915ada-tmp@sha256:13b36ede114b664c2b326ff0a8a783cc2fdafad45edba9bf4d59615de8bf2588"],"repoTags":[],"size":"1466018"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d3
60bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","repoDigests":["registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2c
df279939d8e0e036","registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.1"],"size":"63026504"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"4f67c83422ec747235357c04556616234e66fc3fa39cb4f40b2d4441ddd8f100","repoDigests":["docker.io/library/nginx@sha256:0f04e4f646a3f14bf31d8bc8d885b6c951fdcf42589d06845f64d18aec6a3c4d","docker.io/library/nginx@sha256:1445eb9c6dc5e9619346c836ef6fbd6a95092e4663f27dcfce116f051cdbd232"],"repoTags":["docker.io/library/nginx:latest"],"size":"191814165"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-co
ntainers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-647968"],"size":"34114467"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea","registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.1"],"size":"117601759"},{"id":"25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","repoDigests":["registry.k8s.io/kube-controlle
r-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52","registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.1"],"size":"112170310"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@s
ha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"359c81fb537d26f21cb46d928e47f091096b4db34e0d868f37fa46768c336879","repoDigests":["localhost/minikube-local-cache-test@sha256:0b55ccbbd9a9e1206492a5b0020eb59afaadbcb0ae6ced21185ef1475b20f7dc"],"repoTags":["localhost/minikube-local-cache-test:functional-647968"],"size":"3330"},{"id":"bb7e9d75d725802c0304b8fe9bbdb1727a37ef333f3af8cef68a686292ff3873","repoDigests":["localhost/my-image@sha256:2d9edef7c01b2daa44db08092bbfbffb4f2eaf41f12223f736817bac223a546d"],"repoTags":["localhost/my-image:functional-647968"],"size":"1468600"},{"id":"747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","repoDigests":["registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa","registry.k8s.io/kube-proxy@sha256:a
1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.1"],"size":"85933465"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-647968 image ls --format json --alsologtostderr:
I0610 10:37:32.125815   21549 out.go:291] Setting OutFile to fd 1 ...
I0610 10:37:32.126075   21549 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 10:37:32.126085   21549 out.go:304] Setting ErrFile to fd 2...
I0610 10:37:32.126091   21549 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 10:37:32.126344   21549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
I0610 10:37:32.127057   21549 config.go:182] Loaded profile config "functional-647968": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0610 10:37:32.127200   21549 config.go:182] Loaded profile config "functional-647968": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0610 10:37:32.127723   21549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0610 10:37:32.127781   21549 main.go:141] libmachine: Launching plugin server for driver kvm2
I0610 10:37:32.142172   21549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40623
I0610 10:37:32.142621   21549 main.go:141] libmachine: () Calling .GetVersion
I0610 10:37:32.143254   21549 main.go:141] libmachine: Using API Version  1
I0610 10:37:32.143283   21549 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 10:37:32.143700   21549 main.go:141] libmachine: () Calling .GetMachineName
I0610 10:37:32.143935   21549 main.go:141] libmachine: (functional-647968) Calling .GetState
I0610 10:37:32.145744   21549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0610 10:37:32.145777   21549 main.go:141] libmachine: Launching plugin server for driver kvm2
I0610 10:37:32.159695   21549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44245
I0610 10:37:32.160093   21549 main.go:141] libmachine: () Calling .GetVersion
I0610 10:37:32.160531   21549 main.go:141] libmachine: Using API Version  1
I0610 10:37:32.160568   21549 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 10:37:32.160862   21549 main.go:141] libmachine: () Calling .GetMachineName
I0610 10:37:32.161082   21549 main.go:141] libmachine: (functional-647968) Calling .DriverName
I0610 10:37:32.161272   21549 ssh_runner.go:195] Run: systemctl --version
I0610 10:37:32.161299   21549 main.go:141] libmachine: (functional-647968) Calling .GetSSHHostname
I0610 10:37:32.164104   21549 main.go:141] libmachine: (functional-647968) DBG | domain functional-647968 has defined MAC address 52:54:00:56:75:47 in network mk-functional-647968
I0610 10:37:32.164575   21549 main.go:141] libmachine: (functional-647968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:75:47", ip: ""} in network mk-functional-647968: {Iface:virbr1 ExpiryTime:2024-06-10 11:34:22 +0000 UTC Type:0 Mac:52:54:00:56:75:47 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:functional-647968 Clientid:01:52:54:00:56:75:47}
I0610 10:37:32.164596   21549 main.go:141] libmachine: (functional-647968) DBG | domain functional-647968 has defined IP address 192.168.39.252 and MAC address 52:54:00:56:75:47 in network mk-functional-647968
I0610 10:37:32.164733   21549 main.go:141] libmachine: (functional-647968) Calling .GetSSHPort
I0610 10:37:32.164924   21549 main.go:141] libmachine: (functional-647968) Calling .GetSSHKeyPath
I0610 10:37:32.165136   21549 main.go:141] libmachine: (functional-647968) Calling .GetSSHUsername
I0610 10:37:32.165320   21549 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/functional-647968/id_rsa Username:docker}
I0610 10:37:32.247345   21549 ssh_runner.go:195] Run: sudo crictl images --output json
I0610 10:37:32.284788   21549 main.go:141] libmachine: Making call to close driver server
I0610 10:37:32.284810   21549 main.go:141] libmachine: (functional-647968) Calling .Close
I0610 10:37:32.285096   21549 main.go:141] libmachine: Successfully made call to close driver server
I0610 10:37:32.285111   21549 main.go:141] libmachine: Making call to close connection to plugin binary
I0610 10:37:32.285121   21549 main.go:141] libmachine: Making call to close driver server
I0610 10:37:32.285129   21549 main.go:141] libmachine: (functional-647968) Calling .Close
I0610 10:37:32.285377   21549 main.go:141] libmachine: (functional-647968) DBG | Closing plugin on server side
I0610 10:37:32.285387   21549 main.go:141] libmachine: Successfully made call to close driver server
I0610 10:37:32.285414   21549 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
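
For reference, the listing variants exercised by the ImageListTable/ImageListJson tests above (and ImageListYaml below) can be reproduced roughly as follows; a sketch, assuming the functional-647968 profile from this run is still up and that ImageListTable uses --format table:

    # List the images in the node's CRI-O store in each of the formats tested here.
    out/minikube-linux-amd64 -p functional-647968 image ls --format table
    out/minikube-linux-amd64 -p functional-647968 image ls --format json
    out/minikube-linux-amd64 -p functional-647968 image ls --format yaml
    # Per the --alsologtostderr traces, each invocation SSHes into the node and
    # runs: sudo crictl images --output json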

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-647968 image ls --format yaml --alsologtostderr:
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea
- registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.1
size: "117601759"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-647968
size: "34114467"
- id: 4f67c83422ec747235357c04556616234e66fc3fa39cb4f40b2d4441ddd8f100
repoDigests:
- docker.io/library/nginx@sha256:0f04e4f646a3f14bf31d8bc8d885b6c951fdcf42589d06845f64d18aec6a3c4d
- docker.io/library/nginx@sha256:1445eb9c6dc5e9619346c836ef6fbd6a95092e4663f27dcfce116f051cdbd232
repoTags:
- docker.io/library/nginx:latest
size: "191814165"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036
- registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.1
size: "63026504"
- id: 25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52
- registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.1
size: "112170310"
- id: 747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd
repoDigests:
- registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa
- registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c
repoTags:
- registry.k8s.io/kube-proxy:v1.30.1
size: "85933465"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 359c81fb537d26f21cb46d928e47f091096b4db34e0d868f37fa46768c336879
repoDigests:
- localhost/minikube-local-cache-test@sha256:0b55ccbbd9a9e1206492a5b0020eb59afaadbcb0ae6ced21185ef1475b20f7dc
repoTags:
- localhost/minikube-local-cache-test:functional-647968
size: "3330"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-647968 image ls --format yaml --alsologtostderr:
I0610 10:37:24.259944   21416 out.go:291] Setting OutFile to fd 1 ...
I0610 10:37:24.260184   21416 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 10:37:24.260194   21416 out.go:304] Setting ErrFile to fd 2...
I0610 10:37:24.260200   21416 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 10:37:24.260379   21416 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
I0610 10:37:24.260931   21416 config.go:182] Loaded profile config "functional-647968": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0610 10:37:24.261068   21416 config.go:182] Loaded profile config "functional-647968": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0610 10:37:24.261475   21416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0610 10:37:24.261532   21416 main.go:141] libmachine: Launching plugin server for driver kvm2
I0610 10:37:24.276488   21416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40491
I0610 10:37:24.276970   21416 main.go:141] libmachine: () Calling .GetVersion
I0610 10:37:24.277526   21416 main.go:141] libmachine: Using API Version  1
I0610 10:37:24.277551   21416 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 10:37:24.277920   21416 main.go:141] libmachine: () Calling .GetMachineName
I0610 10:37:24.278102   21416 main.go:141] libmachine: (functional-647968) Calling .GetState
I0610 10:37:24.280179   21416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0610 10:37:24.280229   21416 main.go:141] libmachine: Launching plugin server for driver kvm2
I0610 10:37:24.295132   21416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38733
I0610 10:37:24.295528   21416 main.go:141] libmachine: () Calling .GetVersion
I0610 10:37:24.295937   21416 main.go:141] libmachine: Using API Version  1
I0610 10:37:24.295956   21416 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 10:37:24.296275   21416 main.go:141] libmachine: () Calling .GetMachineName
I0610 10:37:24.296454   21416 main.go:141] libmachine: (functional-647968) Calling .DriverName
I0610 10:37:24.296658   21416 ssh_runner.go:195] Run: systemctl --version
I0610 10:37:24.296679   21416 main.go:141] libmachine: (functional-647968) Calling .GetSSHHostname
I0610 10:37:24.299210   21416 main.go:141] libmachine: (functional-647968) DBG | domain functional-647968 has defined MAC address 52:54:00:56:75:47 in network mk-functional-647968
I0610 10:37:24.299599   21416 main.go:141] libmachine: (functional-647968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:75:47", ip: ""} in network mk-functional-647968: {Iface:virbr1 ExpiryTime:2024-06-10 11:34:22 +0000 UTC Type:0 Mac:52:54:00:56:75:47 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:functional-647968 Clientid:01:52:54:00:56:75:47}
I0610 10:37:24.299643   21416 main.go:141] libmachine: (functional-647968) DBG | domain functional-647968 has defined IP address 192.168.39.252 and MAC address 52:54:00:56:75:47 in network mk-functional-647968
I0610 10:37:24.299730   21416 main.go:141] libmachine: (functional-647968) Calling .GetSSHPort
I0610 10:37:24.299903   21416 main.go:141] libmachine: (functional-647968) Calling .GetSSHKeyPath
I0610 10:37:24.300058   21416 main.go:141] libmachine: (functional-647968) Calling .GetSSHUsername
I0610 10:37:24.300202   21416 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/functional-647968/id_rsa Username:docker}
I0610 10:37:24.415211   21416 ssh_runner.go:195] Run: sudo crictl images --output json
I0610 10:37:24.492994   21416 main.go:141] libmachine: Making call to close driver server
I0610 10:37:24.493014   21416 main.go:141] libmachine: (functional-647968) Calling .Close
I0610 10:37:24.493333   21416 main.go:141] libmachine: Successfully made call to close driver server
I0610 10:37:24.493353   21416 main.go:141] libmachine: Making call to close connection to plugin binary
I0610 10:37:24.493362   21416 main.go:141] libmachine: Making call to close driver server
I0610 10:37:24.493371   21416 main.go:141] libmachine: (functional-647968) Calling .Close
I0610 10:37:24.493642   21416 main.go:141] libmachine: (functional-647968) DBG | Closing plugin on server side
I0610 10:37:24.493658   21416 main.go:141] libmachine: Successfully made call to close driver server
I0610 10:37:24.493684   21416 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (7.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-647968 ssh pgrep buildkitd: exit status 1 (241.107254ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 image build -t localhost/my-image:functional-647968 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-647968 image build -t localhost/my-image:functional-647968 testdata/build --alsologtostderr: (7.102235754s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-647968 image build -t localhost/my-image:functional-647968 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b30f9b1bd29
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-647968
--> bb7e9d75d72
Successfully tagged localhost/my-image:functional-647968
bb7e9d75d725802c0304b8fe9bbdb1727a37ef333f3af8cef68a686292ff3873
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-647968 image build -t localhost/my-image:functional-647968 testdata/build --alsologtostderr:
I0610 10:37:24.780444   21470 out.go:291] Setting OutFile to fd 1 ...
I0610 10:37:24.780562   21470 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 10:37:24.780570   21470 out.go:304] Setting ErrFile to fd 2...
I0610 10:37:24.780575   21470 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0610 10:37:24.780750   21470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
I0610 10:37:24.781292   21470 config.go:182] Loaded profile config "functional-647968": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0610 10:37:24.781698   21470 config.go:182] Loaded profile config "functional-647968": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0610 10:37:24.782043   21470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0610 10:37:24.782083   21470 main.go:141] libmachine: Launching plugin server for driver kvm2
I0610 10:37:24.796847   21470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38871
I0610 10:37:24.797343   21470 main.go:141] libmachine: () Calling .GetVersion
I0610 10:37:24.797845   21470 main.go:141] libmachine: Using API Version  1
I0610 10:37:24.797863   21470 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 10:37:24.798205   21470 main.go:141] libmachine: () Calling .GetMachineName
I0610 10:37:24.798421   21470 main.go:141] libmachine: (functional-647968) Calling .GetState
I0610 10:37:24.800521   21470 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0610 10:37:24.800573   21470 main.go:141] libmachine: Launching plugin server for driver kvm2
I0610 10:37:24.815354   21470 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39417
I0610 10:37:24.815933   21470 main.go:141] libmachine: () Calling .GetVersion
I0610 10:37:24.817607   21470 main.go:141] libmachine: Using API Version  1
I0610 10:37:24.817788   21470 main.go:141] libmachine: () Calling .SetConfigRaw
I0610 10:37:24.818245   21470 main.go:141] libmachine: () Calling .GetMachineName
I0610 10:37:24.818447   21470 main.go:141] libmachine: (functional-647968) Calling .DriverName
I0610 10:37:24.818680   21470 ssh_runner.go:195] Run: systemctl --version
I0610 10:37:24.818705   21470 main.go:141] libmachine: (functional-647968) Calling .GetSSHHostname
I0610 10:37:24.821350   21470 main.go:141] libmachine: (functional-647968) DBG | domain functional-647968 has defined MAC address 52:54:00:56:75:47 in network mk-functional-647968
I0610 10:37:24.821718   21470 main.go:141] libmachine: (functional-647968) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:75:47", ip: ""} in network mk-functional-647968: {Iface:virbr1 ExpiryTime:2024-06-10 11:34:22 +0000 UTC Type:0 Mac:52:54:00:56:75:47 Iaid: IPaddr:192.168.39.252 Prefix:24 Hostname:functional-647968 Clientid:01:52:54:00:56:75:47}
I0610 10:37:24.821747   21470 main.go:141] libmachine: (functional-647968) DBG | domain functional-647968 has defined IP address 192.168.39.252 and MAC address 52:54:00:56:75:47 in network mk-functional-647968
I0610 10:37:24.821861   21470 main.go:141] libmachine: (functional-647968) Calling .GetSSHPort
I0610 10:37:24.822042   21470 main.go:141] libmachine: (functional-647968) Calling .GetSSHKeyPath
I0610 10:37:24.822184   21470 main.go:141] libmachine: (functional-647968) Calling .GetSSHUsername
I0610 10:37:24.822325   21470 sshutil.go:53] new ssh client: &{IP:192.168.39.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/functional-647968/id_rsa Username:docker}
I0610 10:37:24.959748   21470 build_images.go:161] Building image from path: /tmp/build.309738542.tar
I0610 10:37:24.959802   21470 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0610 10:37:24.986108   21470 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.309738542.tar
I0610 10:37:24.994844   21470 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.309738542.tar: stat -c "%s %y" /var/lib/minikube/build/build.309738542.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.309738542.tar': No such file or directory
I0610 10:37:24.994876   21470 ssh_runner.go:362] scp /tmp/build.309738542.tar --> /var/lib/minikube/build/build.309738542.tar (3072 bytes)
I0610 10:37:25.024373   21470 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.309738542
I0610 10:37:25.036657   21470 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.309738542 -xf /var/lib/minikube/build/build.309738542.tar
I0610 10:37:25.051557   21470 crio.go:315] Building image: /var/lib/minikube/build/build.309738542
I0610 10:37:25.051615   21470 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-647968 /var/lib/minikube/build/build.309738542 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0610 10:37:31.723763   21470 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-647968 /var/lib/minikube/build/build.309738542 --cgroup-manager=cgroupfs: (6.672122102s)
I0610 10:37:31.723823   21470 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.309738542
I0610 10:37:31.752929   21470 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.309738542.tar
I0610 10:37:31.763621   21470 build_images.go:217] Built localhost/my-image:functional-647968 from /tmp/build.309738542.tar
I0610 10:37:31.763659   21470 build_images.go:133] succeeded building to: functional-647968
I0610 10:37:31.763665   21470 build_images.go:134] failed building to: 
I0610 10:37:31.763757   21470 main.go:141] libmachine: Making call to close driver server
I0610 10:37:31.763781   21470 main.go:141] libmachine: (functional-647968) Calling .Close
I0610 10:37:31.764146   21470 main.go:141] libmachine: (functional-647968) DBG | Closing plugin on server side
I0610 10:37:31.764161   21470 main.go:141] libmachine: Successfully made call to close driver server
I0610 10:37:31.764176   21470 main.go:141] libmachine: Making call to close connection to plugin binary
I0610 10:37:31.764194   21470 main.go:141] libmachine: Making call to close driver server
I0610 10:37:31.764203   21470 main.go:141] libmachine: (functional-647968) Calling .Close
I0610 10:37:31.764454   21470 main.go:141] libmachine: Successfully made call to close driver server
I0610 10:37:31.764469   21470 main.go:141] libmachine: Making call to close connection to plugin binary
I0610 10:37:31.764478   21470 main.go:141] libmachine: (functional-647968) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (7.58s)
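
A minimal sketch of the ImageBuild flow above. The Containerfile contents are inferred from the STEP 1/3..3/3 lines; /tmp/build-sketch and its content.txt payload are hypothetical stand-ins for the repository's testdata/build directory:

    mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
    echo "created by sketch" > content.txt    # hypothetical payload file
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    # With the CRI-O runtime, minikube tars the context, copies it to
    # /var/lib/minikube/build on the node, and runs
    # "sudo podman build ... --cgroup-manager=cgroupfs" there (see the trace above).
    out/minikube-linux-amd64 -p functional-647968 image build \
      -t localhost/my-image:functional-647968 /tmp/build-sketch --alsologtostderr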

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.95s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.929359484s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-647968
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.95s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-647968 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-647968 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-x2c2p" [d8085da3-ea05-419c-869e-dae7a1537fd7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-x2c2p" [d8085da3-ea05-419c-869e-dae7a1537fd7] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004498343s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.19s)
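
The hello-node workload that the remaining ServiceCmd tests query is just the deployment created above; a manual equivalent, where the kubectl wait line is an assumption standing in for the test's own readiness polling:

    kubectl --context functional-647968 create deployment hello-node \
      --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-647968 expose deployment hello-node \
      --type=NodePort --port=8080
    # The test polls pods labelled app=hello-node until they are Running.
    kubectl --context functional-647968 wait pod -l app=hello-node \
      --for=condition=ready --timeout=600s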

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 image load --daemon gcr.io/google-containers/addon-resizer:functional-647968 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-647968 image load --daemon gcr.io/google-containers/addon-resizer:functional-647968 --alsologtostderr: (4.010394586s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 image load --daemon gcr.io/google-containers/addon-resizer:functional-647968 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-647968 image load --daemon gcr.io/google-containers/addon-resizer:functional-647968 --alsologtostderr: (2.440634286s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.66s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.821018352s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-647968
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 image load --daemon gcr.io/google-containers/addon-resizer:functional-647968 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-647968 image load --daemon gcr.io/google-containers/addon-resizer:functional-647968 --alsologtostderr: (6.266685588s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.33s)
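
Taken together, ImageCommands/Setup and the three *LoadDaemon tests above exercise the host-daemon-to-cluster image path; condensed from the commands in this run:

    # Pull and retag on the host's Docker daemon.
    docker pull gcr.io/google-containers/addon-resizer:1.8.9
    docker tag gcr.io/google-containers/addon-resizer:1.8.9 \
      gcr.io/google-containers/addon-resizer:functional-647968
    # Copy the tagged image from the host daemon into the cluster's CRI-O store.
    out/minikube-linux-amd64 -p functional-647968 image load --daemon \
      gcr.io/google-containers/addon-resizer:functional-647968 --alsologtostderr
    out/minikube-linux-amd64 -p functional-647968 image ls    # confirm it is listed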

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 service list -o json
functional_test.go:1490: Took "370.197999ms" to run "out/minikube-linux-amd64 -p functional-647968 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.252:31824
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.252:31824
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)
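
The ServiceCmd lookups above resolve the same hello-node NodePort endpoint in different ways; condensed from the commands in this run:

    out/minikube-linux-amd64 -p functional-647968 service list
    out/minikube-linux-amd64 -p functional-647968 service list -o json
    out/minikube-linux-amd64 -p functional-647968 service --namespace=default \
      --https --url hello-node    # printed https://192.168.39.252:31824 in this run
    out/minikube-linux-amd64 -p functional-647968 service hello-node --url --format={{.IP}}
    out/minikube-linux-amd64 -p functional-647968 service hello-node --url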

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "324.212714ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "54.491572ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.91s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-647968 /tmp/TestFunctionalparallelMountCmdany-port1394966672/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1718015830653118152" to /tmp/TestFunctionalparallelMountCmdany-port1394966672/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1718015830653118152" to /tmp/TestFunctionalparallelMountCmdany-port1394966672/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1718015830653118152" to /tmp/TestFunctionalparallelMountCmdany-port1394966672/001/test-1718015830653118152
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-647968 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (244.823487ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun 10 10:37 created-by-test
-rw-r--r-- 1 docker docker 24 Jun 10 10:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun 10 10:37 test-1718015830653118152
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh cat /mount-9p/test-1718015830653118152
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-647968 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [8f3c1708-109c-4e68-bd44-be90e802949c] Pending
helpers_test.go:344: "busybox-mount" [8f3c1708-109c-4e68-bd44-be90e802949c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [8f3c1708-109c-4e68-bd44-be90e802949c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [8f3c1708-109c-4e68-bd44-be90e802949c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003670231s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-647968 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-647968 /tmp/TestFunctionalparallelMountCmdany-port1394966672/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.91s)
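
A condensed version of the any-port mount check above; /tmp/mount-src is a hypothetical host directory standing in for the per-test temp dir:

    out/minikube-linux-amd64 mount -p functional-647968 \
      /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &    # background 9p server
    sleep 5    # the test retries findmnt instead of sleeping
    out/minikube-linux-amd64 -p functional-647968 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-647968 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-647968 ssh "sudo umount -f /mount-9p"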

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "270.693067ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "44.31853ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)
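
The ProfileCmd timings above come from the following invocations; in this run the lighter variants (-l, --light) returned roughly an order of magnitude faster than the full listings:

    out/minikube-linux-amd64 profile list
    out/minikube-linux-amd64 profile list -l
    out/minikube-linux-amd64 profile list -o json
    out/minikube-linux-amd64 profile list -o json --light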

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.76s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 image save gcr.io/google-containers/addon-resizer:functional-647968 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-647968 image save gcr.io/google-containers/addon-resizer:functional-647968 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.069092058s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 image rm gcr.io/google-containers/addon-resizer:functional-647968 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-647968 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.785678239s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-647968
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 image save --daemon gcr.io/google-containers/addon-resizer:functional-647968 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-647968 image save --daemon gcr.io/google-containers/addon-resizer:functional-647968 --alsologtostderr: (1.410499936s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-647968
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.45s)
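
ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon above form a round-trip: export an image from the cluster, delete it, re-import it, and finally copy it back into the host's Docker daemon. A sketch, with /tmp/addon-resizer-save.tar as a placeholder for the workspace path used by the test:

    out/minikube-linux-amd64 -p functional-647968 image save \
      gcr.io/google-containers/addon-resizer:functional-647968 /tmp/addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-647968 image rm \
      gcr.io/google-containers/addon-resizer:functional-647968
    out/minikube-linux-amd64 -p functional-647968 image load /tmp/addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-647968 image save --daemon \
      gcr.io/google-containers/addon-resizer:functional-647968
    docker image inspect gcr.io/google-containers/addon-resizer:functional-647968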

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.91s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-647968 /tmp/TestFunctionalparallelMountCmdspecific-port2963321019/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-647968 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (313.255968ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-647968 /tmp/TestFunctionalparallelMountCmdspecific-port2963321019/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-647968 ssh "sudo umount -f /mount-9p": exit status 1 (264.446998ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-647968 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-647968 /tmp/TestFunctionalparallelMountCmdspecific-port2963321019/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.91s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.44s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-647968 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1576890524/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-647968 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1576890524/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-647968 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1576890524/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-647968 ssh "findmnt -T" /mount1: exit status 1 (315.845849ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-647968 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-647968 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-647968 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1576890524/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-647968 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1576890524/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-647968 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1576890524/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.44s)
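
Note: VerifyCleanup relies on `mount --kill=true` tearing down every mount helper process for a profile in one call. A hedged sketch of that cleanup path (profile and source directory are placeholders):

    # Launch several mounts against one profile; each mount process is backgrounded.
    out/minikube-linux-amd64 mount -p <profile> /tmp/mount-src:/mount1 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 mount -p <profile> /tmp/mount-src:/mount2 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 mount -p <profile> /tmp/mount-src:/mount3 --alsologtostderr -v=1 &

    # Spot-check one target from inside the guest, then kill all mount processes for the profile at once.
    out/minikube-linux-amd64 -p <profile> ssh "findmnt -T /mount1"
    out/minikube-linux-amd64 mount -p <profile> --kill=true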

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-647968
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-647968
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-647968
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (209.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-565925 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0610 10:39:12.453427   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
E0610 10:39:40.137096   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-565925 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m29.114586474s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (209.78s)
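
Note: the HA start above reduces to one start invocation plus a status check; a sketch using the same flags and profile name as this run:

    # Bring up a multi-control-plane (HA) cluster on the KVM driver with CRI-O.
    out/minikube-linux-amd64 start -p ha-565925 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio

    # Confirm that all control-plane nodes report healthy.
    out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr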

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565925 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565925 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-565925 -- rollout status deployment/busybox: (4.442736504s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565925 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565925 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565925 -- exec busybox-fc5497c4f-6wmkd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565925 -- exec busybox-fc5497c4f-8g67g -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565925 -- exec busybox-fc5497c4f-jmbg2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565925 -- exec busybox-fc5497c4f-6wmkd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565925 -- exec busybox-fc5497c4f-8g67g -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565925 -- exec busybox-fc5497c4f-jmbg2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565925 -- exec busybox-fc5497c4f-6wmkd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565925 -- exec busybox-fc5497c4f-8g67g -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565925 -- exec busybox-fc5497c4f-jmbg2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.61s)
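
Note: the deploy step applies the busybox DNS test manifest and then exercises in-cluster DNS from each replica. A condensed sketch (the pod name is a per-run placeholder; the testdata path is the one used by the suite):

    # Deploy the test workload and wait for the rollout to finish.
    out/minikube-linux-amd64 kubectl -p ha-565925 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-amd64 kubectl -p ha-565925 -- rollout status deployment/busybox

    # Resolve an external and an in-cluster name from one of the busybox pods.
    out/minikube-linux-amd64 kubectl -p ha-565925 -- exec <busybox-pod> -- nslookup kubernetes.io
    out/minikube-linux-amd64 kubectl -p ha-565925 -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local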

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565925 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565925 -- exec busybox-fc5497c4f-6wmkd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565925 -- exec busybox-fc5497c4f-6wmkd -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565925 -- exec busybox-fc5497c4f-8g67g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565925 -- exec busybox-fc5497c4f-8g67g -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565925 -- exec busybox-fc5497c4f-jmbg2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-565925 -- exec busybox-fc5497c4f-jmbg2 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.18s)
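
Note: host reachability is checked by resolving host.minikube.internal inside a pod and pinging the address it returns; a sketch (pod name is a placeholder, 192.168.39.1 is the gateway seen in this run):

    # Look up the host address as seen from inside the cluster.
    out/minikube-linux-amd64 kubectl -p ha-565925 -- exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"

    # Ping the address returned above.
    out/minikube-linux-amd64 kubectl -p ha-565925 -- exec <busybox-pod> -- sh -c "ping -c 1 192.168.39.1"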

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (47.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-565925 -v=7 --alsologtostderr
E0610 10:41:57.914180   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
E0610 10:41:57.919474   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
E0610 10:41:57.929792   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
E0610 10:41:57.950116   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
E0610 10:41:57.990464   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
E0610 10:41:58.070796   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
E0610 10:41:58.231255   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
E0610 10:41:58.551837   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
E0610 10:41:59.192319   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
E0610 10:42:00.473070   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
E0610 10:42:03.033743   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
E0610 10:42:08.153995   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-565925 -v=7 --alsologtostderr: (46.987337068s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (47.81s)
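
Note: adding a worker to an existing HA profile is a single `node add`; a sketch with the flags used above:

    # Add a worker node to the running ha-565925 profile, then re-check cluster status.
    out/minikube-linux-amd64 node add -p ha-565925 -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr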

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-565925 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 cp testdata/cp-test.txt ha-565925:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925 "sudo cat /home/docker/cp-test.txt"
E0610 10:42:18.395198   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 cp ha-565925:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1107448961/001/cp-test_ha-565925.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 cp ha-565925:/home/docker/cp-test.txt ha-565925-m02:/home/docker/cp-test_ha-565925_ha-565925-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m02 "sudo cat /home/docker/cp-test_ha-565925_ha-565925-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 cp ha-565925:/home/docker/cp-test.txt ha-565925-m03:/home/docker/cp-test_ha-565925_ha-565925-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m03 "sudo cat /home/docker/cp-test_ha-565925_ha-565925-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 cp ha-565925:/home/docker/cp-test.txt ha-565925-m04:/home/docker/cp-test_ha-565925_ha-565925-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m04 "sudo cat /home/docker/cp-test_ha-565925_ha-565925-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 cp testdata/cp-test.txt ha-565925-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 cp ha-565925-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1107448961/001/cp-test_ha-565925-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 cp ha-565925-m02:/home/docker/cp-test.txt ha-565925:/home/docker/cp-test_ha-565925-m02_ha-565925.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925 "sudo cat /home/docker/cp-test_ha-565925-m02_ha-565925.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 cp ha-565925-m02:/home/docker/cp-test.txt ha-565925-m03:/home/docker/cp-test_ha-565925-m02_ha-565925-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m03 "sudo cat /home/docker/cp-test_ha-565925-m02_ha-565925-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 cp ha-565925-m02:/home/docker/cp-test.txt ha-565925-m04:/home/docker/cp-test_ha-565925-m02_ha-565925-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m04 "sudo cat /home/docker/cp-test_ha-565925-m02_ha-565925-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 cp testdata/cp-test.txt ha-565925-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 cp ha-565925-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1107448961/001/cp-test_ha-565925-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 cp ha-565925-m03:/home/docker/cp-test.txt ha-565925:/home/docker/cp-test_ha-565925-m03_ha-565925.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925 "sudo cat /home/docker/cp-test_ha-565925-m03_ha-565925.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 cp ha-565925-m03:/home/docker/cp-test.txt ha-565925-m02:/home/docker/cp-test_ha-565925-m03_ha-565925-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m02 "sudo cat /home/docker/cp-test_ha-565925-m03_ha-565925-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 cp ha-565925-m03:/home/docker/cp-test.txt ha-565925-m04:/home/docker/cp-test_ha-565925-m03_ha-565925-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m04 "sudo cat /home/docker/cp-test_ha-565925-m03_ha-565925-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 cp testdata/cp-test.txt ha-565925-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1107448961/001/cp-test_ha-565925-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt ha-565925:/home/docker/cp-test_ha-565925-m04_ha-565925.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925 "sudo cat /home/docker/cp-test_ha-565925-m04_ha-565925.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt ha-565925-m02:/home/docker/cp-test_ha-565925-m04_ha-565925-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m02 "sudo cat /home/docker/cp-test_ha-565925-m04_ha-565925-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 cp ha-565925-m04:/home/docker/cp-test.txt ha-565925-m03:/home/docker/cp-test_ha-565925-m04_ha-565925-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m03 "sudo cat /home/docker/cp-test_ha-565925-m04_ha-565925-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.55s)
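
Note: the copy matrix above repeats one pattern for every node pair: `cp` a file onto a node, then `ssh -n` into that node to verify the contents. A condensed sketch of the pattern:

    # Copy a local file onto a specific node of the profile, then read it back over SSH.
    out/minikube-linux-amd64 -p ha-565925 cp testdata/cp-test.txt ha-565925-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m02 "sudo cat /home/docker/cp-test.txt"

    # Node-to-node copies use the same cp subcommand with node-prefixed source and destination paths.
    out/minikube-linux-amd64 -p ha-565925 cp ha-565925-m02:/home/docker/cp-test.txt ha-565925-m03:/home/docker/cp-test_ha-565925-m02_ha-565925-m03.txt
    out/minikube-linux-amd64 -p ha-565925 ssh -n ha-565925-m03 "sudo cat /home/docker/cp-test_ha-565925-m02_ha-565925-m03.txt"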

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.482723353s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (17.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-565925 node delete m03 -v=7 --alsologtostderr: (16.354060021s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.11s)
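
Note: deleting a secondary control-plane node and confirming the remaining members, as exercised above:

    # Remove the m03 node from the HA profile, then verify what is left from both minikube and kubectl.
    out/minikube-linux-amd64 -p ha-565925 node delete m03 -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-565925 status -v=7 --alsologtostderr
    kubectl get nodes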

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

                                                
                                    
x
+
TestJSONOutput/start/Command (96.32s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-737529 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0610 11:09:12.453491   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-737529 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m36.315033624s)
--- PASS: TestJSONOutput/start/Command (96.32s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-737529 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-737529 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.36s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-737529 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-737529 --output=json --user=testUser: (7.355385926s)
--- PASS: TestJSONOutput/stop/Command (7.36s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-482439 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-482439 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.960442ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"83eb7bd6-b437-4314-a63d-d56d796c2a89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-482439] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2fffe066-c9f4-4b3d-aa5c-54d5402257a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19046"}}
	{"specversion":"1.0","id":"eff7028a-7476-4a50-9aef-7abf026d92d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"31b206a1-f2ce-4a52-9186-d13213d8fddf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig"}}
	{"specversion":"1.0","id":"e633a3e4-d23f-43d3-b34d-b026caf6760a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube"}}
	{"specversion":"1.0","id":"ccea8ce1-7d6d-494e-af12-fbf9fcb8d01c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"07e13e6c-31a9-4f9b-9bd7-956f267b96cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"28a3c1a1-e31a-44cc-98b8-459940f63791","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-482439" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-482439
--- PASS: TestErrorJSONOutput (0.19s)
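
Note: each line of the --output=json stream above is a CloudEvents-style object whose type field distinguishes step, info, and error events. As a hedged sketch (the jq filter is illustrative and not part of the test), the error event can be extracted like this:

    # Surface only error events from a JSON-output run; data.exitcode and data.name identify the failure class.
    out/minikube-linux-amd64 start -p json-output-error-482439 --memory=2200 --output=json --wait=true --driver=fail \
      | jq 'select(.type == "io.k8s.sigs.minikube.error") | .data'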

                                                
                                    
x
+
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (81.83s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-474670 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-474670 --driver=kvm2  --container-runtime=crio: (41.660998363s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-477365 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-477365 --driver=kvm2  --container-runtime=crio: (37.566736336s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-474670
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-477365
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-477365" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-477365
helpers_test.go:175: Cleaning up "first-474670" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-474670
--- PASS: TestMinikubeProfile (81.83s)
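
Note: profile switching as exercised above: start two independent profiles, make each the active profile in turn, and inspect the result as JSON (profile names reused from this run):

    # Create two profiles, switch the active profile, and list them as JSON.
    out/minikube-linux-amd64 start -p first-474670 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p second-477365 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 profile first-474670
    out/minikube-linux-amd64 profile list -ojson

    # Clean up both profiles afterwards.
    out/minikube-linux-amd64 delete -p second-477365
    out/minikube-linux-amd64 delete -p first-474670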

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (24.28s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-339300 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-339300 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.275686195s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.28s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-339300 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-339300 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
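
Note: the mount-start flow boots a Kubernetes-free VM with a 9p host mount and then checks it from inside the guest; a sketch with the same flags as this run (the guest-side mountpoint appears as /minikube-host here; the host-side source is whatever default this suite uses, which is an assumption on my part):

    # Start a no-Kubernetes VM with a 9p mount on a fixed port.
    out/minikube-linux-amd64 start -p mount-start-1-339300 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 --container-runtime=crio

    # Verify the mount from inside the guest.
    out/minikube-linux-amd64 -p mount-start-1-339300 ssh -- ls /minikube-host
    out/minikube-linux-amd64 -p mount-start-1-339300 ssh -- mount | grep 9p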

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (27.57s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-352017 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-352017 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.57370956s)
E0610 11:11:57.913617   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
--- PASS: TestMountStart/serial/StartWithMountSecond (27.57s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-352017 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-352017 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-339300 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-352017 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-352017 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-352017
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-352017: (1.264776244s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (22.4s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-352017
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-352017: (21.399972192s)
--- PASS: TestMountStart/serial/RestartStopped (22.40s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-352017 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-352017 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (99.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-862380 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-862380 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m39.368297399s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (99.78s)
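
Note: the two-node cluster (one control plane, one worker) above is started with --nodes=2; a sketch with the flags and profile name from this run:

    # Start a two-node cluster and confirm both nodes are reported by status.
    out/minikube-linux-amd64 start -p multinode-862380 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p multinode-862380 status --alsologtostderr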

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-862380 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-862380 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-862380 -- rollout status deployment/busybox: (3.860943916s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-862380 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-862380 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-862380 -- exec busybox-fc5497c4f-jx8f9 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-862380 -- exec busybox-fc5497c4f-n6zqh -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-862380 -- exec busybox-fc5497c4f-jx8f9 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-862380 -- exec busybox-fc5497c4f-n6zqh -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-862380 -- exec busybox-fc5497c4f-jx8f9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-862380 -- exec busybox-fc5497c4f-n6zqh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.31s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-862380 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-862380 -- exec busybox-fc5497c4f-jx8f9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-862380 -- exec busybox-fc5497c4f-jx8f9 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-862380 -- exec busybox-fc5497c4f-n6zqh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-862380 -- exec busybox-fc5497c4f-n6zqh -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.74s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (37.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-862380 -v 3 --alsologtostderr
E0610 11:14:12.453446   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-862380 -v 3 --alsologtostderr: (36.508947402s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (37.06s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-862380 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 cp testdata/cp-test.txt multinode-862380:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 ssh -n multinode-862380 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 cp multinode-862380:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4163337793/001/cp-test_multinode-862380.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 ssh -n multinode-862380 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 cp multinode-862380:/home/docker/cp-test.txt multinode-862380-m02:/home/docker/cp-test_multinode-862380_multinode-862380-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 ssh -n multinode-862380 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 ssh -n multinode-862380-m02 "sudo cat /home/docker/cp-test_multinode-862380_multinode-862380-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 cp multinode-862380:/home/docker/cp-test.txt multinode-862380-m03:/home/docker/cp-test_multinode-862380_multinode-862380-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 ssh -n multinode-862380 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 ssh -n multinode-862380-m03 "sudo cat /home/docker/cp-test_multinode-862380_multinode-862380-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 cp testdata/cp-test.txt multinode-862380-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 ssh -n multinode-862380-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 cp multinode-862380-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4163337793/001/cp-test_multinode-862380-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 ssh -n multinode-862380-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 cp multinode-862380-m02:/home/docker/cp-test.txt multinode-862380:/home/docker/cp-test_multinode-862380-m02_multinode-862380.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 ssh -n multinode-862380-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 ssh -n multinode-862380 "sudo cat /home/docker/cp-test_multinode-862380-m02_multinode-862380.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 cp multinode-862380-m02:/home/docker/cp-test.txt multinode-862380-m03:/home/docker/cp-test_multinode-862380-m02_multinode-862380-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 ssh -n multinode-862380-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 ssh -n multinode-862380-m03 "sudo cat /home/docker/cp-test_multinode-862380-m02_multinode-862380-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 cp testdata/cp-test.txt multinode-862380-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 ssh -n multinode-862380-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 cp multinode-862380-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4163337793/001/cp-test_multinode-862380-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 ssh -n multinode-862380-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 cp multinode-862380-m03:/home/docker/cp-test.txt multinode-862380:/home/docker/cp-test_multinode-862380-m03_multinode-862380.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 ssh -n multinode-862380-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 ssh -n multinode-862380 "sudo cat /home/docker/cp-test_multinode-862380-m03_multinode-862380.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 cp multinode-862380-m03:/home/docker/cp-test.txt multinode-862380-m02:/home/docker/cp-test_multinode-862380-m03_multinode-862380-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 ssh -n multinode-862380-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 ssh -n multinode-862380-m02 "sudo cat /home/docker/cp-test_multinode-862380-m03_multinode-862380-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.06s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-862380 node stop m03: (1.392395976s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-862380 status: exit status 7 (422.439347ms)

                                                
                                                
-- stdout --
	multinode-862380
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-862380-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-862380-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-862380 status --alsologtostderr: exit status 7 (433.79815ms)

                                                
                                                
-- stdout --
	multinode-862380
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-862380-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-862380-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 11:14:56.917989   39875 out.go:291] Setting OutFile to fd 1 ...
	I0610 11:14:56.918102   39875 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:14:56.918112   39875 out.go:304] Setting ErrFile to fd 2...
	I0610 11:14:56.918118   39875 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:14:56.918298   39875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 11:14:56.918468   39875 out.go:298] Setting JSON to false
	I0610 11:14:56.918494   39875 mustload.go:65] Loading cluster: multinode-862380
	I0610 11:14:56.918595   39875 notify.go:220] Checking for updates...
	I0610 11:14:56.918899   39875 config.go:182] Loaded profile config "multinode-862380": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:14:56.918916   39875 status.go:255] checking status of multinode-862380 ...
	I0610 11:14:56.919378   39875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:14:56.919453   39875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:14:56.938302   39875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38865
	I0610 11:14:56.938767   39875 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:14:56.939292   39875 main.go:141] libmachine: Using API Version  1
	I0610 11:14:56.939341   39875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:14:56.939719   39875 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:14:56.939932   39875 main.go:141] libmachine: (multinode-862380) Calling .GetState
	I0610 11:14:56.941487   39875 status.go:330] multinode-862380 host status = "Running" (err=<nil>)
	I0610 11:14:56.941508   39875 host.go:66] Checking if "multinode-862380" exists ...
	I0610 11:14:56.941791   39875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:14:56.941825   39875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:14:56.956613   39875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44575
	I0610 11:14:56.957019   39875 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:14:56.957536   39875 main.go:141] libmachine: Using API Version  1
	I0610 11:14:56.957569   39875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:14:56.957908   39875 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:14:56.958099   39875 main.go:141] libmachine: (multinode-862380) Calling .GetIP
	I0610 11:14:56.961332   39875 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:14:56.961785   39875 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:14:56.961811   39875 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:14:56.961974   39875 host.go:66] Checking if "multinode-862380" exists ...
	I0610 11:14:56.962271   39875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:14:56.962313   39875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:14:56.979511   39875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37357
	I0610 11:14:56.979968   39875 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:14:56.980444   39875 main.go:141] libmachine: Using API Version  1
	I0610 11:14:56.980466   39875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:14:56.980761   39875 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:14:56.980984   39875 main.go:141] libmachine: (multinode-862380) Calling .DriverName
	I0610 11:14:56.981179   39875 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 11:14:56.981216   39875 main.go:141] libmachine: (multinode-862380) Calling .GetSSHHostname
	I0610 11:14:56.984135   39875 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:14:56.984538   39875 main.go:141] libmachine: (multinode-862380) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:87:87", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:12:38 +0000 UTC Type:0 Mac:52:54:00:08:87:87 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:multinode-862380 Clientid:01:52:54:00:08:87:87}
	I0610 11:14:56.984568   39875 main.go:141] libmachine: (multinode-862380) DBG | domain multinode-862380 has defined IP address 192.168.39.100 and MAC address 52:54:00:08:87:87 in network mk-multinode-862380
	I0610 11:14:56.984715   39875 main.go:141] libmachine: (multinode-862380) Calling .GetSSHPort
	I0610 11:14:56.984890   39875 main.go:141] libmachine: (multinode-862380) Calling .GetSSHKeyPath
	I0610 11:14:56.985039   39875 main.go:141] libmachine: (multinode-862380) Calling .GetSSHUsername
	I0610 11:14:56.985189   39875 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/multinode-862380/id_rsa Username:docker}
	I0610 11:14:57.071983   39875 ssh_runner.go:195] Run: systemctl --version
	I0610 11:14:57.077956   39875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:14:57.093506   39875 kubeconfig.go:125] found "multinode-862380" server: "https://192.168.39.100:8443"
	I0610 11:14:57.093541   39875 api_server.go:166] Checking apiserver status ...
	I0610 11:14:57.093584   39875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 11:14:57.112459   39875 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1142/cgroup
	W0610 11:14:57.124267   39875 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1142/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0610 11:14:57.124315   39875 ssh_runner.go:195] Run: ls
	I0610 11:14:57.129032   39875 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I0610 11:14:57.134759   39875 api_server.go:279] https://192.168.39.100:8443/healthz returned 200:
	ok
	I0610 11:14:57.134785   39875 status.go:422] multinode-862380 apiserver status = Running (err=<nil>)
	I0610 11:14:57.134794   39875 status.go:257] multinode-862380 status: &{Name:multinode-862380 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0610 11:14:57.134810   39875 status.go:255] checking status of multinode-862380-m02 ...
	I0610 11:14:57.135067   39875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:14:57.135103   39875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:14:57.150003   39875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34161
	I0610 11:14:57.150451   39875 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:14:57.150940   39875 main.go:141] libmachine: Using API Version  1
	I0610 11:14:57.150970   39875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:14:57.151277   39875 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:14:57.151468   39875 main.go:141] libmachine: (multinode-862380-m02) Calling .GetState
	I0610 11:14:57.152826   39875 status.go:330] multinode-862380-m02 host status = "Running" (err=<nil>)
	I0610 11:14:57.152844   39875 host.go:66] Checking if "multinode-862380-m02" exists ...
	I0610 11:14:57.153276   39875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:14:57.153324   39875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:14:57.168098   39875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34163
	I0610 11:14:57.168521   39875 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:14:57.169014   39875 main.go:141] libmachine: Using API Version  1
	I0610 11:14:57.169036   39875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:14:57.169308   39875 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:14:57.169483   39875 main.go:141] libmachine: (multinode-862380-m02) Calling .GetIP
	I0610 11:14:57.172443   39875 main.go:141] libmachine: (multinode-862380-m02) DBG | domain multinode-862380-m02 has defined MAC address 52:54:00:a6:09:52 in network mk-multinode-862380
	I0610 11:14:57.172906   39875 main.go:141] libmachine: (multinode-862380-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:09:52", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:13:38 +0000 UTC Type:0 Mac:52:54:00:a6:09:52 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:multinode-862380-m02 Clientid:01:52:54:00:a6:09:52}
	I0610 11:14:57.172934   39875 main.go:141] libmachine: (multinode-862380-m02) DBG | domain multinode-862380-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:a6:09:52 in network mk-multinode-862380
	I0610 11:14:57.173108   39875 host.go:66] Checking if "multinode-862380-m02" exists ...
	I0610 11:14:57.173399   39875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:14:57.173447   39875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:14:57.188463   39875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33185
	I0610 11:14:57.188828   39875 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:14:57.189288   39875 main.go:141] libmachine: Using API Version  1
	I0610 11:14:57.189317   39875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:14:57.189604   39875 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:14:57.189776   39875 main.go:141] libmachine: (multinode-862380-m02) Calling .DriverName
	I0610 11:14:57.189932   39875 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0610 11:14:57.189964   39875 main.go:141] libmachine: (multinode-862380-m02) Calling .GetSSHHostname
	I0610 11:14:57.192691   39875 main.go:141] libmachine: (multinode-862380-m02) DBG | domain multinode-862380-m02 has defined MAC address 52:54:00:a6:09:52 in network mk-multinode-862380
	I0610 11:14:57.193113   39875 main.go:141] libmachine: (multinode-862380-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:09:52", ip: ""} in network mk-multinode-862380: {Iface:virbr1 ExpiryTime:2024-06-10 12:13:38 +0000 UTC Type:0 Mac:52:54:00:a6:09:52 Iaid: IPaddr:192.168.39.47 Prefix:24 Hostname:multinode-862380-m02 Clientid:01:52:54:00:a6:09:52}
	I0610 11:14:57.193149   39875 main.go:141] libmachine: (multinode-862380-m02) DBG | domain multinode-862380-m02 has defined IP address 192.168.39.47 and MAC address 52:54:00:a6:09:52 in network mk-multinode-862380
	I0610 11:14:57.193296   39875 main.go:141] libmachine: (multinode-862380-m02) Calling .GetSSHPort
	I0610 11:14:57.193449   39875 main.go:141] libmachine: (multinode-862380-m02) Calling .GetSSHKeyPath
	I0610 11:14:57.193575   39875 main.go:141] libmachine: (multinode-862380-m02) Calling .GetSSHUsername
	I0610 11:14:57.193729   39875 sshutil.go:53] new ssh client: &{IP:192.168.39.47 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19046-3880/.minikube/machines/multinode-862380-m02/id_rsa Username:docker}
	I0610 11:14:57.275612   39875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 11:14:57.290883   39875 status.go:257] multinode-862380-m02 status: &{Name:multinode-862380-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0610 11:14:57.290919   39875 status.go:255] checking status of multinode-862380-m03 ...
	I0610 11:14:57.291271   39875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0610 11:14:57.291320   39875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0610 11:14:57.307568   39875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40819
	I0610 11:14:57.308006   39875 main.go:141] libmachine: () Calling .GetVersion
	I0610 11:14:57.308493   39875 main.go:141] libmachine: Using API Version  1
	I0610 11:14:57.308514   39875 main.go:141] libmachine: () Calling .SetConfigRaw
	I0610 11:14:57.308799   39875 main.go:141] libmachine: () Calling .GetMachineName
	I0610 11:14:57.309047   39875 main.go:141] libmachine: (multinode-862380-m03) Calling .GetState
	I0610 11:14:57.310418   39875 status.go:330] multinode-862380-m03 host status = "Stopped" (err=<nil>)
	I0610 11:14:57.310429   39875 status.go:343] host is not running, skipping remaining checks
	I0610 11:14:57.310436   39875 status.go:257] multinode-862380-m03 status: &{Name:multinode-862380-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
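
For reference, the flow exercised above maps onto plain minikube CLI calls; a minimal sketch, assuming a hypothetical multi-node profile named "demo" (the flags mirror the commands captured in this test):

    minikube -p demo node stop m03
    minikube -p demo status                       # exits 7 while a node is down; m03 shows host/kubelet Stopped
    minikube -p demo status --alsologtostderr     # same check, with the kvm2 driver and apiserver probes logged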

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (28.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 node start m03 -v=7 --alsologtostderr
E0610 11:15:00.959889   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-862380 node start m03 -v=7 --alsologtostderr: (27.873477564s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (28.49s)
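
A sketch of bringing the stopped node back, again against the placeholder profile "demo"; node start blocks until the node rejoins, after which status and kubectl should report it again:

    minikube -p demo node start m03 -v=7 --alsologtostderr
    minikube -p demo status -v=7 --alsologtostderr
    kubectl get nodes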

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-862380 node delete m03: (1.742779932s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.25s)
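
The delete flow, sketched against the same placeholder profile; after the node is removed, status and the node list should no longer mention it:

    minikube -p demo node delete m03
    minikube -p demo status --alsologtostderr
    kubectl get nodes                             # the deleted node should no longer be listed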

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (172.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-862380 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0610 11:23:55.501889   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
E0610 11:24:12.453775   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-862380 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m52.436883115s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-862380 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (172.99s)
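
Restarting the whole multi-node cluster in one call; --wait=true makes start block until all components report healthy. Sketch with the placeholder profile name:

    minikube start -p demo --wait=true -v=8 --alsologtostderr --driver=kvm2 --container-runtime=crio
    minikube -p demo status --alsologtostderr
    kubectl get nodes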

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (44.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-862380
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-862380-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-862380-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (57.351142ms)

                                                
                                                
-- stdout --
	* [multinode-862380-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19046
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-862380-m02' is duplicated with machine name 'multinode-862380-m02' in profile 'multinode-862380'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-862380-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-862380-m03 --driver=kvm2  --container-runtime=crio: (43.266656603s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-862380
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-862380: exit status 80 (207.714389ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-862380 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-862380-m03 already exists in multinode-862380-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-862380-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.53s)
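
Both rejections above come from minikube's profile-name rules: a new profile may not reuse the machine name of a node that belongs to another profile, and node add refuses a node name that already exists. A sketch of the same checks, with "demo" as a placeholder profile:

    minikube start -p demo-m02 --driver=kvm2 --container-runtime=crio   # rejected (MK_USAGE) if profile "demo" already owns node demo-m02
    minikube node add -p demo                                           # rejected (GUEST_NODE_ADD) if the next node name is already taken
    minikube delete -p demo-m03                                         # remove a conflicting standalone profile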

                                                
                                    
x
+
TestScheduledStopUnix (111.02s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-211714 --memory=2048 --driver=kvm2  --container-runtime=crio
E0610 11:31:40.960766   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-211714 --memory=2048 --driver=kvm2  --container-runtime=crio: (39.436762212s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-211714 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-211714 -n scheduled-stop-211714
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-211714 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-211714 --cancel-scheduled
E0610 11:31:57.914159   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-211714 -n scheduled-stop-211714
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-211714
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-211714 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-211714
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-211714: exit status 7 (62.376232ms)

                                                
                                                
-- stdout --
	scheduled-stop-211714
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-211714 -n scheduled-stop-211714
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-211714 -n scheduled-stop-211714: exit status 7 (64.665175ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-211714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-211714
--- PASS: TestScheduledStopUnix (111.02s)
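
The scheduled-stop flow above reduces to three CLI calls; a sketch against a placeholder profile:

    minikube stop -p demo --schedule 5m                   # arm a stop five minutes out
    minikube status -p demo --format='{{.TimeToStop}}'    # reports the pending stop, if any
    minikube stop -p demo --cancel-scheduled              # disarm it
    minikube stop -p demo --schedule 15s                  # or let a short schedule fire; status then exits 7 with everything Stopped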

                                                
                                    
x
+
TestRunningBinaryUpgrade (242.34s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1507472423 start -p running-upgrade-130010 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1507472423 start -p running-upgrade-130010 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m1.921339471s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-130010 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-130010 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m56.474137351s)
helpers_test.go:175: Cleaning up "running-upgrade-130010" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-130010
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-130010: (1.164801793s)
--- PASS: TestRunningBinaryUpgrade (242.34s)
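
The upgrade path being tested is simply: create a cluster with an older release binary, then run start on the same profile with the current binary. A sketch, assuming an old binary saved as ./minikube-old and a placeholder profile name (the test uses a downloaded v1.26.0 binary):

    ./minikube-old start -p upgrade-demo --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    minikube start -p upgrade-demo --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
    minikube delete -p upgrade-demo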

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-103815 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-103815 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (82.660902ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-103815] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19046
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
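
This case documents that --no-kubernetes and --kubernetes-version are mutually exclusive; the fix suggested by the error output is to clear any global version pin first. Sketch with a placeholder profile name:

    minikube config unset kubernetes-version      # clear any globally pinned version
    minikube start -p demo --no-kubernetes --driver=kvm2 --container-runtime=crio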

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (68.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-103815 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-103815 --driver=kvm2  --container-runtime=crio: (1m8.020686573s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-103815 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (68.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-491653 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-491653 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (108.970177ms)

                                                
                                                
-- stdout --
	* [false-491653] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19046
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 11:33:01.529973   47755 out.go:291] Setting OutFile to fd 1 ...
	I0610 11:33:01.530229   47755 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:33:01.530239   47755 out.go:304] Setting ErrFile to fd 2...
	I0610 11:33:01.530245   47755 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0610 11:33:01.530460   47755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19046-3880/.minikube/bin
	I0610 11:33:01.531072   47755 out.go:298] Setting JSON to false
	I0610 11:33:01.532034   47755 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4523,"bootTime":1718014659,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0610 11:33:01.532091   47755 start.go:139] virtualization: kvm guest
	I0610 11:33:01.534453   47755 out.go:177] * [false-491653] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0610 11:33:01.535919   47755 out.go:177]   - MINIKUBE_LOCATION=19046
	I0610 11:33:01.535966   47755 notify.go:220] Checking for updates...
	I0610 11:33:01.537331   47755 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 11:33:01.539079   47755 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19046-3880/kubeconfig
	I0610 11:33:01.540587   47755 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19046-3880/.minikube
	I0610 11:33:01.541991   47755 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0610 11:33:01.543358   47755 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 11:33:01.545025   47755 config.go:182] Loaded profile config "NoKubernetes-103815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:33:01.545147   47755 config.go:182] Loaded profile config "offline-crio-079649": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0610 11:33:01.545254   47755 driver.go:392] Setting default libvirt URI to qemu:///system
	I0610 11:33:01.581216   47755 out.go:177] * Using the kvm2 driver based on user configuration
	I0610 11:33:01.582921   47755 start.go:297] selected driver: kvm2
	I0610 11:33:01.582942   47755 start.go:901] validating driver "kvm2" against <nil>
	I0610 11:33:01.582957   47755 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 11:33:01.585479   47755 out.go:177] 
	W0610 11:33:01.586634   47755 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0610 11:33:01.587929   47755 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-491653 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-491653

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-491653

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-491653

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-491653

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-491653

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-491653

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-491653

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-491653

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-491653

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-491653

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-491653

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-491653" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-491653" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-491653

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-491653"

                                                
                                                
----------------------- debugLogs end: false-491653 [took: 2.758610481s] --------------------------------
helpers_test.go:175: Cleaning up "false-491653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-491653
--- PASS: TestNetworkPlugins/group/false (3.02s)
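
The pass here is the rejection itself: with the crio runtime, --cni=false is refused because CRI-O needs a CNI plugin. A sketch of a start line that should be accepted instead; --cni=bridge is an assumption (any supported CNI value, or omitting the flag for auto-selection, should also work):

    minikube start -p demo --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio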

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (50.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-103815 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0610 11:34:12.453691   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-103815 --no-kubernetes --driver=kvm2  --container-runtime=crio: (49.70447294s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-103815 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-103815 status -o json: exit status 2 (224.087155ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-103815","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-103815
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (50.92s)
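
Re-running start on an existing profile with --no-kubernetes keeps the VM but leaves kubelet and the apiserver stopped, which is what the JSON status above shows. Sketch with a placeholder profile:

    minikube start -p demo --no-kubernetes --driver=kvm2 --container-runtime=crio
    minikube -p demo status -o json               # expect "Kubelet":"Stopped","APIServer":"Stopped"; exit code 2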

                                                
                                    
x
+
TestNoKubernetes/serial/Start (57.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-103815 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-103815 --no-kubernetes --driver=kvm2  --container-runtime=crio: (57.791477672s)
--- PASS: TestNoKubernetes/serial/Start (57.79s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-103815 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-103815 "sudo systemctl is-active --quiet service kubelet": exit status 1 (203.432411ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
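
The verification is just an SSH probe of the kubelet unit; a non-zero exit means it is not active. Sketch with a placeholder profile:

    minikube ssh -p demo "sudo systemctl is-active --quiet service kubelet" \
      || echo "kubelet is not active (expected when Kubernetes is disabled)"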

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.203302588s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.78s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-103815
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-103815: (1.297118112s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (46.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-103815 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-103815 --driver=kvm2  --container-runtime=crio: (46.202441286s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (46.20s)

                                                
                                    
x
+
TestPause/serial/Start (63.38s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-761253 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-761253 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m3.381247441s)
--- PASS: TestPause/serial/Start (63.38s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-103815 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-103815 "sudo systemctl is-active --quiet service kubelet": exit status 1 (195.39692ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.3s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.30s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (122.84s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2623089891 start -p stopped-upgrade-161665 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2623089891 start -p stopped-upgrade-161665 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m21.949677389s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2623089891 -p stopped-upgrade-161665 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2623089891 -p stopped-upgrade-161665 stop: (1.431389048s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-161665 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-161665 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.461114832s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (122.84s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (53.9s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-761253 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-761253 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (53.8700133s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (53.90s)

                                                
                                    
TestPause/serial/Pause (0.65s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-761253 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.65s)

                                                
                                    
TestPause/serial/VerifyStatus (0.25s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-761253 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-761253 --output=json --layout=cluster: exit status 2 (245.959541ms)

                                                
                                                
-- stdout --
	{"Name":"pause-761253","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-761253","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)

                                                
                                    
TestPause/serial/Unpause (0.63s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-761253 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.63s)

                                                
                                    
TestPause/serial/PauseAgain (0.76s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-761253 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.76s)

                                                
                                    
TestPause/serial/DeletePaused (1.01s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-761253 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-761253 --alsologtostderr -v=5: (1.006782549s)
--- PASS: TestPause/serial/DeletePaused (1.01s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (14.53s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (14.527600751s)
--- PASS: TestPause/serial/VerifyDeletedResources (14.53s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-161665
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (69.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-832735 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
E0610 11:39:12.453402   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-832735 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (1m9.077060762s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (69.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (129.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-298179 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-298179 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (2m9.373018094s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (129.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-832735 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f5a24d2e-a638-4a3c-bd49-8c6f5c07b55b] Pending
helpers_test.go:344: "busybox" [f5a24d2e-a638-4a3c-bd49-8c6f5c07b55b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f5a24d2e-a638-4a3c-bd49-8c6f5c07b55b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004443728s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-832735 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-832735 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-832735 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-298179 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5b5407fc-95cd-43ef-b297-9ac8350ffffc] Pending
helpers_test.go:344: "busybox" [5b5407fc-95cd-43ef-b297-9ac8350ffffc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5b5407fc-95cd-43ef-b297-9ac8350ffffc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004656123s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-298179 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-298179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-298179 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (635.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-832735 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-832735 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (10m35.622906167s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-832735 -n embed-certs-832735
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (635.88s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (303.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-281114 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-281114 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (5m3.083835181s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (303.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (574.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-298179 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-298179 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (9m34.010809556s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-298179 -n no-preload-298179
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (574.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (6.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-166693 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-166693 --alsologtostderr -v=3: (6.293287399s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (6.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-166693 -n old-k8s-version-166693
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-166693 -n old-k8s-version-166693: exit status 7 (63.713022ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-166693 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-281114 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2f79bfd5-0b14-40bc-82b2-59100857105d] Pending
helpers_test.go:344: "busybox" [2f79bfd5-0b14-40bc-82b2-59100857105d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0610 11:49:12.453107   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
helpers_test.go:344: "busybox" [2f79bfd5-0b14-40bc-82b2-59100857105d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004359984s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-281114 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-281114 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-281114 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (627.56s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-281114 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
E0610 11:51:57.913680   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-281114 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (10m27.289370052s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-281114 -n default-k8s-diff-port-281114
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (627.56s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (55.77s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-003554 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-003554 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (55.773060523s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (55.77s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (113.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-491653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E0610 12:09:12.453155   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/addons-021732/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-491653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m53.969754134s)
--- PASS: TestNetworkPlugins/group/auto/Start (113.97s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (87.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-491653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-491653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m27.725042738s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (87.73s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-003554 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-003554 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.198173404s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (8.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-003554 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-003554 --alsologtostderr -v=3: (8.369901285s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-003554 -n newest-cni-003554
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-003554 -n newest-cni-003554: exit status 7 (65.749526ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-003554 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (43.98s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-003554 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-003554 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1: (43.70001498s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-003554 -n newest-cni-003554
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (43.98s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-003554 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-003554 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-003554 -n newest-cni-003554
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-003554 -n newest-cni-003554: exit status 2 (244.189254ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-003554 -n newest-cni-003554
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-003554 -n newest-cni-003554: exit status 2 (251.436928ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-003554 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-003554 -n newest-cni-003554
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-003554 -n newest-cni-003554
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.52s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-v6b78" [5e67cc44-a0dd-4ba3-853b-ea06b0fbe32e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005134543s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (91.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-491653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-491653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m31.577978658s)
--- PASS: TestNetworkPlugins/group/calico/Start (91.58s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-491653 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-491653 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-wlq9c" [f4d67130-7adb-4a85-8a59-91929837e813] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-wlq9c" [f4d67130-7adb-4a85-8a59-91929837e813] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003435974s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-491653 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-491653 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-k7rx2" [d2206f55-26b7-4020-bff8-6065f252caa2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-k7rx2" [d2206f55-26b7-4020-bff8-6065f252caa2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.0037714s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-491653 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-491653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-491653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-491653 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-491653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-491653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-491653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0610 12:11:31.955972   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/no-preload-298179/client.crt: no such file or directory
E0610 12:11:31.961409   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/no-preload-298179/client.crt: no such file or directory
E0610 12:11:31.971660   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/no-preload-298179/client.crt: no such file or directory
E0610 12:11:31.991948   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/no-preload-298179/client.crt: no such file or directory
E0610 12:11:32.032338   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/no-preload-298179/client.crt: no such file or directory
E0610 12:11:32.113036   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/no-preload-298179/client.crt: no such file or directory
E0610 12:11:32.273276   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/no-preload-298179/client.crt: no such file or directory
E0610 12:11:32.593659   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/no-preload-298179/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-491653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m39.004591621s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (99.00s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (120.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-491653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0610 12:11:34.515059   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/no-preload-298179/client.crt: no such file or directory
E0610 12:11:37.075948   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/no-preload-298179/client.crt: no such file or directory
E0610 12:11:42.196498   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/no-preload-298179/client.crt: no such file or directory
E0610 12:11:49.515749   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/client.crt: no such file or directory
E0610 12:11:49.521001   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/client.crt: no such file or directory
E0610 12:11:49.531255   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/client.crt: no such file or directory
E0610 12:11:49.551608   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/client.crt: no such file or directory
E0610 12:11:49.591891   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/client.crt: no such file or directory
E0610 12:11:49.672269   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/client.crt: no such file or directory
E0610 12:11:49.832694   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/client.crt: no such file or directory
E0610 12:11:50.153011   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/client.crt: no such file or directory
E0610 12:11:50.793360   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/client.crt: no such file or directory
E0610 12:11:52.074220   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/client.crt: no such file or directory
E0610 12:11:52.436685   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/no-preload-298179/client.crt: no such file or directory
E0610 12:11:54.634913   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/client.crt: no such file or directory
E0610 12:11:57.914324   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/functional-647968/client.crt: no such file or directory
E0610 12:11:59.755832   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/client.crt: no such file or directory
E0610 12:12:09.997004   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/client.crt: no such file or directory
E0610 12:12:12.917402   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/no-preload-298179/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-491653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m0.128453669s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (120.13s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-rc62z" [5024a93d-75ba-4d08-bbd6-8d323b438cb4] Running
E0610 12:12:30.477804   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005986998s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-491653 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-491653 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-vcp9m" [5193f3f8-c5e5-4cd9-b16b-e111995063e6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-vcp9m" [5193f3f8-c5e5-4cd9-b16b-e111995063e6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004463451s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-491653 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-491653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-491653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (81.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-491653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-491653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m21.615062806s)
--- PASS: TestNetworkPlugins/group/flannel/Start (81.62s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-491653 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-491653 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bcwrk" [4650a023-9220-432b-9ae5-3604ffe38a87] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0610 12:13:11.438793   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-bcwrk" [4650a023-9220-432b-9ae5-3604ffe38a87] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003469268s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-491653 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-491653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-491653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-491653 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-491653 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-9cmxm" [241b3f94-d1c8-4f8d-941b-dbcde6d40195] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-9cmxm" [241b3f94-d1c8-4f8d-941b-dbcde6d40195] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003757856s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (98.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-491653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-491653 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m38.544416946s)
--- PASS: TestNetworkPlugins/group/bridge/Start (98.54s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-491653 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-491653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-491653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-kjmzw" [e51dda6d-3bee-48be-92e7-a194e40eee85] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004097753s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
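Note: ControllerPod only waits for the flannel DaemonSet pod in the kube-flannel namespace. Equivalent manual checks, with the DaemonSet name inferred from the pod name shown above, would be:
  kubectl --context flannel-491653 -n kube-flannel get pods -l app=flannel -o wide
  kubectl --context flannel-491653 -n kube-flannel rollout status daemonset/kube-flannel-ds --timeout=60s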

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-491653 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-491653 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-9v9pk" [52ccfb75-91a5-4f8c-98a1-28bd17dc4189] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-9v9pk" [52ccfb75-91a5-4f8c-98a1-28bd17dc4189] Running
E0610 12:14:33.359214   10758 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19046-3880/.minikube/profiles/old-k8s-version-166693/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003548142s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-491653 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-491653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-491653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-491653 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)
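Note: KubeletFlags only greps the running kubelet command line over SSH. To also inspect the effective kubelet configuration on the node, something like the following should work (the config path matches the one probed by the debugLogs later in this report):
  out/minikube-linux-amd64 ssh -p bridge-491653 "pgrep -a kubelet"
  out/minikube-linux-amd64 ssh -p bridge-491653 "sudo cat /var/lib/kubelet/config.yaml"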

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-491653 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-hjnz2" [96f7163e-aa95-4b23-83e9-a3778261ef6e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-hjnz2" [96f7163e-aa95-4b23-83e9-a3778261ef6e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004020381s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-491653 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)
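Note: the DNS step resolves kubernetes.default from inside the netcat pod (the container image is dnsutils). If this step ever fails, two useful follow-ups, assuming the image ships nslookup and a shell, are:
  kubectl --context bridge-491653 exec deployment/netcat -- nslookup kubernetes.default.svc.cluster.local
  kubectl --context bridge-491653 exec deployment/netcat -- cat /etc/resolv.conf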

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-491653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-491653 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (37/317)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.1/cached-images 0
15 TestDownloadOnly/v1.30.1/binaries 0
16 TestDownloadOnly/v1.30.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Olm 0
41 TestAddons/parallel/Volcano 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
120 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
121 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
122 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
123 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
126 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
175 TestImageBuild 0
202 TestKicCustomNetwork 0
203 TestKicExistingNetwork 0
204 TestKicCustomSubnet 0
205 TestKicStaticIP 0
237 TestChangeNoneUser 0
240 TestScheduledStopWindows 0
242 TestSkaffold 0
244 TestInsufficientStorage 0
248 TestMissingContainerUpgrade 0
254 TestStartStop/group/disable-driver-mounts 0.17
260 TestNetworkPlugins/group/kubenet 2.99
269 TestNetworkPlugins/group/cilium 3.21
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Volcano
addons_test.go:871: skipping: crio not supported
--- SKIP: TestAddons/parallel/Volcano (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
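Note: all eight TunnelCmd sub-tests above skip for the same reason: the test host cannot run 'route' via sudo without a password prompt. One hedged way to enable them on a CI host is a NOPASSWD sudoers entry for the routing tools the tunnel needs; binary paths and the exact tool list are distro-dependent, so treat this only as a sketch:
  # /etc/sudoers.d/minikube-tunnel  (hypothetical example; adjust user and paths)
  jenkins ALL=(ALL) NOPASSWD: /usr/sbin/route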

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-036579" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-036579
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
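Note: even though this test skips immediately (it only runs on virtualbox), the skip still triggers the usual profile cleanup, which is why the delete -p call appears above. A quick way to confirm no stray profile remains:
  out/minikube-linux-amd64 profile list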

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-491653 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-491653

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-491653

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-491653

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-491653

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-491653

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-491653

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-491653

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-491653

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-491653

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-491653

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-491653

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-491653" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-491653" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-491653

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-491653"

                                                
                                                
----------------------- debugLogs end: kubenet-491653 [took: 2.817030727s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-491653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-491653
--- SKIP: TestNetworkPlugins/group/kubenet (2.99s)
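Note: every "context was not found" / "does not exist" line in the kubenet debugLogs above is expected: the test skips before any cluster is started, so no kubenet-491653 profile or kubeconfig context ever exists and the diagnostic commands have nothing to query. This can be confirmed with:
  kubectl config get-contexts
  out/minikube-linux-amd64 profile list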

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-491653 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-491653

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-491653

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-491653

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-491653

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-491653

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-491653

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-491653

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-491653

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-491653

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-491653

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-491653

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-491653" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-491653

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-491653

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-491653" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-491653

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-491653

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-491653" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-491653" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-491653" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-491653" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-491653" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

>>> host: kubelet daemon config:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

>>> k8s: kubelet logs:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-491653

>>> host: docker daemon status:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

>>> host: docker daemon config:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

>>> host: docker system info:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

>>> host: cri-docker daemon status:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

>>> host: cri-docker daemon config:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

>>> host: cri-dockerd version:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

>>> host: containerd daemon status:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

>>> host: containerd daemon config:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

>>> host: containerd config dump:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

>>> host: crio daemon status:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

>>> host: crio daemon config:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

>>> host: /etc/crio:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

>>> host: crio config:
* Profile "cilium-491653" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-491653"

----------------------- debugLogs end: cilium-491653 [took: 3.061774639s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-491653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-491653
--- SKIP: TestNetworkPlugins/group/cilium (3.21s)
